Test Report: Docker_Linux 22301

0d35a49bec4a61cedce3e0d18d834e0a802a30f5:2025-12-23:42940

Test failures (36/436)

Order  Failed test  Duration (s)
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 497.72
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 367.33
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 1.78
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 1.84
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 1.85
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 731.32
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 1.76
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 0.04
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 2.26
198 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 2.71
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 2.04
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 241.47
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 2.24
214 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 1.22
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash 0.75
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.34
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 0.06
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 108.74
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.05
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.28
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.27
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.3
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.26
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.27
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 2.69
367 TestKubernetesUpgrade 782.74
416 TestStartStop/group/no-preload/serial/FirstStart 503.18
445 TestStartStop/group/newest-cni/serial/FirstStart 498.37
464 TestStartStop/group/no-preload/serial/DeployApp 2.55
465 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 110.79
475 TestStartStop/group/no-preload/serial/SecondStart 370.7
496 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 93.07
514 TestStartStop/group/newest-cni/serial/SecondStart 372.9
520 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.91
524 TestStartStop/group/newest-cni/serial/Pause 6.98
525 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 271.17
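
Each entry above is a standard Go integration test, so an individual failure can usually be reproduced in isolation with go test's -run filter. A minimal sketch, assuming the minikube repo layout with integration tests under test/integration and a prebuilt out/minikube-linux-amd64; the package path and timeout here are assumptions, not taken from this report:

	# Hypothetical local re-run of a single failed test from the table above.
	# -run takes an anchored regexp over the slash-separated subtest name.
	go test ./test/integration -v -timeout 60m \
	  -run 'TestStartStop/group/no-preload/serial/FirstStart'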
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (497.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
E1222 22:43:14.878966   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:45:31.033066   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:45:58.726253   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:30.659414   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:30.668202   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:30.679130   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:30.699439   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:30.739745   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:30.820130   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:30.980631   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:31.301255   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:31.942273   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:33.222787   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:35.784552   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:40.905277   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:46:51.146275   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:47:11.627041   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:47:52.588104   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:49:14.510812   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:50:31.033633   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m16.472465193s)

-- stdout --
	* [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	* Found network options:
	  - HTTP_PROXY=localhost:34075
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34075 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34075 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-384766 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-384766 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001109614s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000802765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000802765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
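The start failure above repeats the same signature three times: kubeadm's wait-control-plane phase gives up after 4m0s because the kubelet never reports healthy on http://127.0.0.1:10248/healthz, and the preflight warnings point at deprecated cgroup v1 support (the kubelet option named in the warning, FailCgroupV1, is a KubeletConfiguration field). The report's own advice suggests two concrete retries; a minimal sketch combining them, reusing the exact start arguments from this run, where the NO_PROXY value is the minikube IP the proxy warning names:

	# Add the minikube IP to NO_PROXY, as the proxy warning above recommends,
	# and pass the cgroup-driver override suggested at the end of stderr.
	export NO_PROXY=192.168.49.2
	out/minikube-linux-amd64 start -p functional-384766 --memory=4096 \
	  --apiserver-port=8441 --wait=all --driver=docker --container-runtime=docker \
	  --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd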
functional_test.go:2246: failed minikube start. args "out/minikube-linux-amd64 start -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 6 (299.036727ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 22:51:16.570194  146268 status.go:458] kubeconfig endpoint: get endpoint: "functional-384766" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
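The status check degrades to exit status 6 because the kubeconfig no longer has an endpoint for this profile; the warning in stdout prints the fix itself. A minimal sketch using the same binary and profile as the rest of this report:

	# Repoint the kubectl context at functional-384766, per the warning above.
	out/minikube-linux-amd64 -p functional-384766 update-context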
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                           ARGS                                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-580825 service hello-node --url                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                                         │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ ssh            │ functional-580825 ssh pgrep buildkitd                                                                                                                                                    │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image          │ functional-580825 image ls --format json --alsologtostderr                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls --format short --alsologtostderr                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image          │ functional-580825 image ls --format yaml --alsologtostderr                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls --format table --alsologtostderr                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr                                                                                   │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ delete         │ -p functional-580825                                                                                                                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ start          │ -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:42:59
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:42:59.842390  134327 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:42:59.842986  134327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:42:59.843001  134327 out.go:374] Setting ErrFile to fd 2...
	I1222 22:42:59.843007  134327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:42:59.843511  134327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:42:59.844291  134327 out.go:368] Setting JSON to false
	I1222 22:42:59.845173  134327 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8720,"bootTime":1766434660,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:42:59.845246  134327 start.go:143] virtualization: kvm guest
	I1222 22:42:59.846903  134327 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:42:59.848336  134327 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:42:59.848434  134327 notify.go:221] Checking for updates...
	I1222 22:42:59.850419  134327 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:42:59.851606  134327 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:42:59.852679  134327 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:42:59.853888  134327 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:42:59.855048  134327 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:42:59.856385  134327 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:42:59.881657  134327 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:42:59.881722  134327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:42:59.935838  134327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-12-22 22:42:59.92621169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:42:59.935934  134327 docker.go:319] overlay module found
	I1222 22:42:59.937690  134327 out.go:179] * Using the docker driver based on user configuration
	I1222 22:42:59.938762  134327 start.go:309] selected driver: docker
	I1222 22:42:59.938768  134327 start.go:928] validating driver "docker" against <nil>
	I1222 22:42:59.938777  134327 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:42:59.939672  134327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:42:59.998039  134327 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:44 SystemTime:2025-12-22 22:42:59.988941385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:42:59.998172  134327 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 22:42:59.998379  134327 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 22:43:00.000073  134327 out.go:179] * Using Docker driver with root privileges
	I1222 22:43:00.001318  134327 cni.go:84] Creating CNI manager for ""
	I1222 22:43:00.001382  134327 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:43:00.001392  134327 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	W1222 22:43:00.001462  134327 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34075 to docker env.
	I1222 22:43:00.001540  134327 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:43:00.002818  134327 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:43:00.003840  134327 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:43:00.004860  134327 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:43:00.005907  134327 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:43:00.005930  134327 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:43:00.005942  134327 cache.go:65] Caching tarball of preloaded images
	I1222 22:43:00.006017  134327 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:43:00.006023  134327 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:43:00.006026  134327 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:43:00.006340  134327 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:43:00.006358  134327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json: {Name:mk103cffb42129f0ed4cacda0289d1119f019236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:00.026068  134327 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:43:00.026078  134327 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:43:00.026091  134327 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:43:00.026123  134327 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:43:00.026210  134327 start.go:364] duration metric: took 74.871µs to acquireMachinesLock for "functional-384766"
	I1222 22:43:00.026228  134327 start.go:93] Provisioning new machine with config: &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 22:43:00.026285  134327 start.go:125] createHost starting for "" (driver="docker")
	I1222 22:43:00.028019  134327 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1222 22:43:00.028335  134327 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34075 to docker env.
	I1222 22:43:00.028357  134327 start.go:159] libmachine.API.Create for "functional-384766" (driver="docker")
	I1222 22:43:00.028377  134327 client.go:173] LocalClient.Create starting
	I1222 22:43:00.028467  134327 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 22:43:00.028500  134327 main.go:144] libmachine: Decoding PEM data...
	I1222 22:43:00.028522  134327 main.go:144] libmachine: Parsing certificate...
	I1222 22:43:00.028625  134327 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 22:43:00.028651  134327 main.go:144] libmachine: Decoding PEM data...
	I1222 22:43:00.028665  134327 main.go:144] libmachine: Parsing certificate...
	I1222 22:43:00.029099  134327 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 22:43:00.046212  134327 cli_runner.go:211] docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 22:43:00.046286  134327 network_create.go:284] running [docker network inspect functional-384766] to gather additional debugging logs...
	I1222 22:43:00.046302  134327 cli_runner.go:164] Run: docker network inspect functional-384766
	W1222 22:43:00.061994  134327 cli_runner.go:211] docker network inspect functional-384766 returned with exit code 1
	I1222 22:43:00.062026  134327 network_create.go:287] error running [docker network inspect functional-384766]: docker network inspect functional-384766: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-384766 not found
	I1222 22:43:00.062040  134327 network_create.go:289] output of [docker network inspect functional-384766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-384766 not found
	
	** /stderr **
	I1222 22:43:00.062138  134327 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:43:00.078959  134327 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c74990}
	I1222 22:43:00.078984  134327 network_create.go:124] attempt to create docker network functional-384766 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1222 22:43:00.079022  134327 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-384766 functional-384766
	I1222 22:43:00.124783  134327 network_create.go:108] docker network functional-384766 192.168.49.0/24 created
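A minimal Go sketch of the inspect-then-create sequence above, reusing the exact flags from the logged `docker network create` command; collapsing every inspect failure into "missing" (instead of checking stderr for "not found" as the log does) is a simplification:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Name, subnet, and gateway as chosen in the log above.
	const name, subnet, gateway = "functional-384766", "192.168.49.0/24", "192.168.49.1"

	// Probe first; a zero exit code means the network already exists.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return
	}

	// Create with the same driver, subnet, MTU, and minikube labels.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name, name).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("network create failed: %v: %s", err, strings.TrimSpace(string(out))))
	}
}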
	I1222 22:43:00.124803  134327 kic.go:121] calculated static IP "192.168.49.2" for the "functional-384766" container
	I1222 22:43:00.124884  134327 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 22:43:00.141158  134327 cli_runner.go:164] Run: docker volume create functional-384766 --label name.minikube.sigs.k8s.io=functional-384766 --label created_by.minikube.sigs.k8s.io=true
	I1222 22:43:00.158715  134327 oci.go:103] Successfully created a docker volume functional-384766
	I1222 22:43:00.158781  134327 cli_runner.go:164] Run: docker run --rm --name functional-384766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-384766 --entrypoint /usr/bin/test -v functional-384766:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 22:43:00.536843  134327 oci.go:107] Successfully prepared a docker volume functional-384766
	I1222 22:43:00.536916  134327 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:43:00.536925  134327 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 22:43:00.536991  134327 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-384766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 22:43:03.750255  134327 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-384766:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (3.213205309s)
	I1222 22:43:03.750283  134327 kic.go:203] duration metric: took 3.213354248s to extract preloaded images to volume ...
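The "Completed: ... (3.213205309s)" line comes from timing a slow child process and logging the duration metric afterwards. A sketch of such a wrapper (the `runTimed` helper and its example command are illustrative, not minikube's cli_runner API):

package main

import (
	"log"
	"os/exec"
	"time"
)

// runTimed runs a command and logs how long it took, mirroring the
// "Completed: ... (duration)" lines in the log above.
func runTimed(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	log.Printf("Completed: %s (%s)", name, time.Since(start))
	return err
}

func main() {
	_ = runTimed("docker", "volume", "ls")
}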
	W1222 22:43:03.750455  134327 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 22:43:03.750553  134327 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 22:43:03.802914  134327 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-384766 --name functional-384766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-384766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-384766 --network functional-384766 --ip 192.168.49.2 --volume functional-384766:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 22:43:04.057295  134327 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Running}}
	I1222 22:43:04.075762  134327 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:43:04.095685  134327 cli_runner.go:164] Run: docker exec functional-384766 stat /var/lib/dpkg/alternatives/iptables
	I1222 22:43:04.146231  134327 oci.go:144] the created container "functional-384766" has a running status.
	I1222 22:43:04.146269  134327 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa...
	I1222 22:43:04.254326  134327 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 22:43:04.278215  134327 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:43:04.296073  134327 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 22:43:04.296086  134327 kic_runner.go:114] Args: [docker exec --privileged functional-384766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 22:43:04.344786  134327 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:43:04.363524  134327 machine.go:94] provisionDockerMachine start ...
	I1222 22:43:04.363626  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:04.381571  134327 main.go:144] libmachine: Using SSH client type: native
	I1222 22:43:04.381878  134327 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:43:04.381888  134327 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:43:04.382785  134327 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34832->127.0.0.1:32783: read: connection reset by peer
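The reset above is expected: sshd inside the freshly started container is still coming up, so the dial is retried until the handshake succeeds (as the next log line shows). A sketch of such a retry loop using golang.org/x/crypto/ssh; the key path is shortened from the one in the log, and the 10-attempt/1-second policy is an assumed placeholder:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path abbreviated; the log uses .minikube/machines/functional-384766/id_rsa.
	keyPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/functional-384766/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // username from the log's sshutil line
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, not production
		Timeout:         5 * time.Second,
	}
	var client *ssh.Client
	for attempt := 1; ; attempt++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:32783", cfg) // forwarded port 32783 from the log
		if err == nil {
			break
		}
		if attempt >= 10 {
			log.Fatalf("giving up after %d attempts: %v", attempt, err)
		}
		time.Sleep(time.Second) // e.g. "connection reset by peer" while sshd boots
	}
	defer client.Close()
	log.Println("ssh ready")
}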
	I1222 22:43:07.526439  134327 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:43:07.526469  134327 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:43:07.526536  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:07.545948  134327 main.go:144] libmachine: Using SSH client type: native
	I1222 22:43:07.546193  134327 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:43:07.546212  134327 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:43:07.695866  134327 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:43:07.695924  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:07.712544  134327 main.go:144] libmachine: Using SSH client type: native
	I1222 22:43:07.712852  134327 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:43:07.712871  134327 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:43:07.854099  134327 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:43:07.854127  134327 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:43:07.854172  134327 ubuntu.go:190] setting up certificates
	I1222 22:43:07.854186  134327 provision.go:84] configureAuth start
	I1222 22:43:07.854262  134327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:43:07.871910  134327 provision.go:143] copyHostCerts
	I1222 22:43:07.871972  134327 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:43:07.871980  134327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:43:07.872058  134327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:43:07.872160  134327 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:43:07.872165  134327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:43:07.872195  134327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:43:07.872271  134327 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:43:07.872275  134327 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:43:07.872301  134327 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:43:07.872409  134327 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
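Generating a server cert with that SAN set boils down to standard crypto/x509 issuance. A self-contained sketch, with the CA generated inline instead of loaded from ca.pem/ca-key.pem and error handling elided for brevity; the SANs and 26280h lifetime mirror the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical throwaway CA; the real flow loads ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-384766"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"functional-384766", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}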
	I1222 22:43:08.036521  134327 provision.go:177] copyRemoteCerts
	I1222 22:43:08.036586  134327 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:43:08.036675  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:08.054544  134327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:43:08.155832  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:43:08.175219  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:43:08.192227  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 22:43:08.209514  134327 provision.go:87] duration metric: took 355.315794ms to configureAuth
	I1222 22:43:08.209533  134327 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:43:08.209748  134327 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:43:08.209796  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:08.227145  134327 main.go:144] libmachine: Using SSH client type: native
	I1222 22:43:08.227353  134327 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:43:08.227358  134327 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:43:08.369310  134327 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:43:08.369322  134327 ubuntu.go:71] root file system type: overlay
	I1222 22:43:08.369445  134327 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:43:08.369495  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:08.386966  134327 main.go:144] libmachine: Using SSH client type: native
	I1222 22:43:08.387189  134327 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:43:08.387245  134327 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:43:08.539683  134327 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:43:08.539748  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:08.558294  134327 main.go:144] libmachine: Using SSH client type: native
	I1222 22:43:08.558506  134327 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:43:08.558520  134327 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:43:09.658984  134327 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-22 22:43:08.537193298 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1222 22:43:09.659006  134327 machine.go:97] duration metric: took 5.295469546s to provisionDockerMachine
	I1222 22:43:09.659016  134327 client.go:176] duration metric: took 9.630634498s to LocalClient.Create
	I1222 22:43:09.659035  134327 start.go:167] duration metric: took 9.630678208s to libmachine.API.Create "functional-384766"
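The diff-or-restart one-liner a few lines up makes the unit update idempotent: docker is only re-enabled and restarted when the staged file actually differs from what is installed. The same logic as a Go sketch (paths as in the log; assumes it runs as root):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	const cur, next = "/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"
	a, _ := os.ReadFile(cur)
	b, err := os.ReadFile(next)
	if err != nil || bytes.Equal(a, b) {
		return // nothing staged, or already up to date
	}
	// Swap the staged unit into place, then reload and restart, mirroring
	// the mv + daemon-reload + enable + restart chain in the SSH command.
	if err := os.Rename(next, cur); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}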
	I1222 22:43:09.659043  134327 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:43:09.659056  134327 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:43:09.659106  134327 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:43:09.659184  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:09.676234  134327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:43:09.779722  134327 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:43:09.783227  134327 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:43:09.783244  134327 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:43:09.783255  134327 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:43:09.783329  134327 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:43:09.783425  134327 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:43:09.783521  134327 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:43:09.783569  134327 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:43:09.791324  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:43:09.810812  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:43:09.827873  134327 start.go:296] duration metric: took 168.81655ms for postStartSetup
	I1222 22:43:09.828226  134327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:43:09.845546  134327 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:43:09.845802  134327 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:43:09.845857  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:09.862738  134327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:43:09.961157  134327 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:43:09.966010  134327 start.go:128] duration metric: took 9.939709767s to createHost
	I1222 22:43:09.966047  134327 start.go:83] releasing machines lock for "functional-384766", held for 9.939811396s
	I1222 22:43:09.966125  134327 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:43:09.984820  134327 out.go:179] * Found network options:
	I1222 22:43:09.986317  134327 out.go:179]   - HTTP_PROXY=localhost:34075
	W1222 22:43:09.987447  134327 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1222 22:43:09.988617  134327 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1222 22:43:09.989650  134327 ssh_runner.go:195] Run: cat /version.json
	I1222 22:43:09.989683  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:09.989748  134327 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:43:09.989803  134327 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:43:10.008991  134327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:43:10.009302  134327 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:43:10.163176  134327 ssh_runner.go:195] Run: systemctl --version
	I1222 22:43:10.169754  134327 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 22:43:10.174094  134327 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:43:10.174136  134327 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:43:10.197824  134327 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1222 22:43:10.197845  134327 start.go:496] detecting cgroup driver to use...
	I1222 22:43:10.197875  134327 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:43:10.197983  134327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:43:10.211554  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:43:10.221448  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:43:10.229443  134327 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:43:10.229490  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:43:10.237838  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:43:10.245745  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:43:10.253441  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:43:10.261191  134327 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:43:10.268514  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:43:10.276423  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:43:10.284385  134327 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:43:10.292740  134327 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:43:10.299472  134327 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:43:10.306266  134327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:43:10.387115  134327 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 22:43:10.458229  134327 start.go:496] detecting cgroup driver to use...
	I1222 22:43:10.458267  134327 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:43:10.458329  134327 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:43:10.471348  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:43:10.482805  134327 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:43:10.502905  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:43:10.514464  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:43:10.525950  134327 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:43:10.539182  134327 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:43:10.542645  134327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:43:10.551212  134327 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:43:10.563822  134327 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:43:10.643708  134327 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:43:10.722977  134327 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:43:10.723079  134327 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
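The 130-byte daemon.json payload itself is not printed in the log. A plausible minimal rendering that pins Docker to the cgroupfs driver via the real `exec-opts` daemon option (the exact contents here are an assumption, not the bytes minikube copied):

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Assumed-minimal config matching the "configuring docker to use
	// cgroupfs" step above; writing to /etc/docker requires root.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	buf, _ := json.MarshalIndent(cfg, "", "  ")
	os.WriteFile("/etc/docker/daemon.json", append(buf, '\n'), 0o644)
}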
	I1222 22:43:10.736218  134327 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:43:10.747889  134327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:43:10.822668  134327 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:43:11.487017  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:43:11.499323  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:43:11.511496  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:43:11.523269  134327 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:43:11.606371  134327 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:43:11.686203  134327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:43:11.776030  134327 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:43:11.801947  134327 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:43:11.813698  134327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:43:11.896166  134327 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:43:11.964841  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:43:11.977925  134327 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:43:11.977977  134327 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:43:11.981749  134327 start.go:564] Will wait 60s for crictl version
	I1222 22:43:11.981790  134327 ssh_runner.go:195] Run: which crictl
	I1222 22:43:11.985190  134327 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:43:12.010044  134327 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:43:12.010089  134327 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:43:12.033700  134327 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:43:12.058968  134327 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:43:12.059036  134327 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:43:12.075916  134327 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:43:12.079882  134327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 22:43:12.090109  134327 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:43:12.090209  134327 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:43:12.090247  134327 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:43:12.110567  134327 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:43:12.110581  134327 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:43:12.110652  134327 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:43:12.130217  134327 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:43:12.130234  134327 cache_images.go:86] Images are preloaded, skipping loading
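That conclusion comes from comparing the `docker images` listing against the image set kubeadm will need. A sketch of the check, with the required list copied (abbreviated) from the preload output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command the log runs twice above.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Subset of the preloaded images printed in the log.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
		"registry.k8s.io/pause:3.10.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would load:", img)
		}
	}
}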
	I1222 22:43:12.130243  134327 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:43:12.130358  134327 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 22:43:12.130419  134327 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:43:12.179367  134327 cni.go:84] Creating CNI manager for ""
	I1222 22:43:12.179382  134327 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:43:12.179401  134327 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:43:12.179423  134327 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:43:12.179554  134327 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
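A quick way to sanity-check the generated multi-document config is to decode it document by document. This sketch uses gopkg.in/yaml.v3 (an assumed dependency, not necessarily what minikube itself uses) and reads back the cgroupDriver that must match the "cgroupfs" driver detected above:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path where the log says the rendered config is copied.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// The kubeadm config is four YAML documents; pick out the kubelet one.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
		}
	}
}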
	
	I1222 22:43:12.179667  134327 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:43:12.187643  134327 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:43:12.187687  134327 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:43:12.195094  134327 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:43:12.206812  134327 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:43:12.218731  134327 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1222 22:43:12.230618  134327 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:43:12.233995  134327 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 22:43:12.243311  134327 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:43:12.320819  134327 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:43:12.347752  134327 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:43:12.347766  134327 certs.go:195] generating shared ca certs ...
	I1222 22:43:12.347785  134327 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:12.347952  134327 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:43:12.347991  134327 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:43:12.347998  134327 certs.go:257] generating profile certs ...
	I1222 22:43:12.348050  134327 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:43:12.348060  134327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt with IP's: []
	I1222 22:43:12.393869  134327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt ...
	I1222 22:43:12.393886  134327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: {Name:mk530e071aad18b3134693c324f9b1dfed234a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:12.394072  134327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key ...
	I1222 22:43:12.394079  134327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key: {Name:mk36ff40081dccdbc3e28dcc99a9f01fe02f823a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:12.394157  134327 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:43:12.394167  134327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt.c9e079a8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1222 22:43:12.495430  134327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt.c9e079a8 ...
	I1222 22:43:12.495447  134327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt.c9e079a8: {Name:mk991fe04215d5a48dff19d83b332257e2a9b977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:12.495606  134327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8 ...
	I1222 22:43:12.495614  134327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8: {Name:mk6f551b6f6ed8853a3ffaf4212c2ec6a1212fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:12.495688  134327 certs.go:382] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt.c9e079a8 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt
	I1222 22:43:12.495781  134327 certs.go:386] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key
	I1222 22:43:12.495840  134327 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:43:12.495851  134327 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt with IP's: []
	I1222 22:43:12.599432  134327 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt ...
	I1222 22:43:12.599449  134327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt: {Name:mka7f526831d42cc26b69e542f287f3ffd5c2994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:12.599660  134327 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key ...
	I1222 22:43:12.599671  134327 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key: {Name:mke420499a15db0f79a1ec68ba87e862a7601a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:43:12.599860  134327 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:43:12.599896  134327 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:43:12.599903  134327 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:43:12.599927  134327 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:43:12.599947  134327 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:43:12.599966  134327 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:43:12.600002  134327 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:43:12.600587  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:43:12.618629  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:43:12.635948  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:43:12.652387  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:43:12.669240  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:43:12.685719  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:43:12.702239  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:43:12.718406  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:43:12.735111  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:43:12.754419  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:43:12.771231  134327 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:43:12.788411  134327 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:43:12.800191  134327 ssh_runner.go:195] Run: openssl version
	I1222 22:43:12.806073  134327 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:43:12.812870  134327 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:43:12.819841  134327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:43:12.823139  134327 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:43:12.823180  134327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:43:12.856461  134327 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 22:43:12.863976  134327 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/758032.pem /etc/ssl/certs/3ec20f2e.0
	I1222 22:43:12.870984  134327 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:43:12.877801  134327 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:43:12.884620  134327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:43:12.888032  134327 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:43:12.888070  134327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:43:12.921031  134327 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:43:12.928344  134327 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 22:43:12.935237  134327 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:43:12.942135  134327 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:43:12.949033  134327 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:43:12.952633  134327 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:43:12.952666  134327 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:43:12.986137  134327 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 22:43:12.993388  134327 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75803.pem /etc/ssl/certs/51391683.0
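
The openssl/ln sequence above implements OpenSSL's hashed-directory lookup: each trust anchor under /etc/ssl/certs is reachable via a <subject-hash>.0 symlink, and `openssl x509 -hash` computes that hash. A sketch of the same scheme for an arbitrary certificate (the cert.pem filename is a placeholder):

    # Compute the subject hash and create the lookup symlink OpenSSL expects
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
    sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/${HASH}.0"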
	I1222 22:43:13.000443  134327 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:43:13.003773  134327 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 22:43:13.003823  134327 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
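
The StartCluster line above prints the full in-memory cluster config. minikube persists the same structure per profile on the host; a sketch for reading it (the config.json filename is an assumption based on minikube's usual profile layout, the directory is taken from this log):

    # Pretty-print the persisted profile config (filename assumed)
    python3 -m json.tool \
      /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json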
	I1222 22:43:13.003935  134327 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:43:13.022246  134327 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:43:13.029925  134327 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 22:43:13.037260  134327 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 22:43:13.037300  134327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 22:43:13.044335  134327 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 22:43:13.044343  134327 kubeadm.go:158] found existing configuration files:
	
	I1222 22:43:13.044373  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 22:43:13.051253  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 22:43:13.051300  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 22:43:13.058200  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 22:43:13.065361  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 22:43:13.065396  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 22:43:13.072996  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 22:43:13.080540  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 22:43:13.080578  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 22:43:13.087954  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 22:43:13.095464  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 22:43:13.095501  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
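
The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes must reference the expected endpoint or it is removed before `kubeadm init` runs (here every file is simply absent, so each grep exits with status 2 and the rm is a no-op). The same sweep as a standalone sketch, with the endpoint taken from this log:

    # Remove any kubeconfig that does not reference the expected endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # missing or mismatched -> drop it
    done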
	I1222 22:43:13.102677  134327 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 22:43:13.216388  134327 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 22:43:13.216952  134327 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 22:43:13.273527  134327 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 22:47:14.702958  134327 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 22:47:14.703001  134327 kubeadm.go:319] 
	I1222 22:47:14.703173  134327 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 22:47:14.706122  134327 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 22:47:14.706161  134327 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 22:47:14.706254  134327 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 22:47:14.706333  134327 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 22:47:14.706366  134327 kubeadm.go:319] OS: Linux
	I1222 22:47:14.706416  134327 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 22:47:14.706466  134327 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 22:47:14.706505  134327 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 22:47:14.706549  134327 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 22:47:14.706609  134327 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 22:47:14.706661  134327 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 22:47:14.706722  134327 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 22:47:14.706770  134327 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 22:47:14.706806  134327 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 22:47:14.706878  134327 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 22:47:14.706958  134327 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 22:47:14.707046  134327 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 22:47:14.707099  134327 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 22:47:14.708921  134327 out.go:252]   - Generating certificates and keys ...
	I1222 22:47:14.709006  134327 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 22:47:14.709096  134327 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 22:47:14.709185  134327 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 22:47:14.709241  134327 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 22:47:14.709313  134327 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 22:47:14.709355  134327 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 22:47:14.709397  134327 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 22:47:14.709500  134327 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-384766 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 22:47:14.709542  134327 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 22:47:14.709652  134327 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-384766 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1222 22:47:14.709700  134327 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 22:47:14.709758  134327 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 22:47:14.709794  134327 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 22:47:14.709843  134327 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 22:47:14.709884  134327 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 22:47:14.709957  134327 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 22:47:14.710042  134327 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 22:47:14.710137  134327 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 22:47:14.710192  134327 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 22:47:14.710259  134327 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 22:47:14.710321  134327 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 22:47:14.711693  134327 out.go:252]   - Booting up control plane ...
	I1222 22:47:14.711763  134327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 22:47:14.711832  134327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 22:47:14.711885  134327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 22:47:14.711969  134327 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 22:47:14.712050  134327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 22:47:14.712137  134327 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 22:47:14.712210  134327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 22:47:14.712241  134327 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 22:47:14.712365  134327 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 22:47:14.712468  134327 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 22:47:14.712537  134327 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001109614s
	I1222 22:47:14.712542  134327 kubeadm.go:319] 
	I1222 22:47:14.712602  134327 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 22:47:14.712629  134327 kubeadm.go:319] 	- The kubelet is not running
	I1222 22:47:14.712711  134327 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 22:47:14.712713  134327 kubeadm.go:319] 
	I1222 22:47:14.712833  134327 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 22:47:14.712875  134327 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 22:47:14.712911  134327 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 22:47:14.712955  134327 kubeadm.go:319] 
	W1222 22:47:14.713125  134327 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-384766 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-384766 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001109614s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
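
kubeadm polls the kubelet's local healthz endpoint on 127.0.0.1:10248 and gives up after 4m0s; the triage commands it suggests run inside the node. A sketch of that triage driven from the host, using the commands the output above recommends (quoting and flags are a reasonable guess at a working invocation, not copied from minikube itself):

    # Check service state, the healthz endpoint kubeadm polls, and recent kubelet logs
    minikube ssh -p functional-384766 -- 'systemctl status kubelet --no-pager;
      curl -sS http://127.0.0.1:10248/healthz; echo;
      sudo journalctl -xeu kubelet --no-pager | tail -n 50'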
	
	I1222 22:47:14.713237  134327 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 22:47:15.126260  134327 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 22:47:15.138517  134327 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 22:47:15.138583  134327 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 22:47:15.146236  134327 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 22:47:15.146247  134327 kubeadm.go:158] found existing configuration files:
	
	I1222 22:47:15.146286  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 22:47:15.153519  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 22:47:15.153558  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 22:47:15.160666  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 22:47:15.167756  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 22:47:15.167801  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 22:47:15.174775  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 22:47:15.181677  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 22:47:15.181718  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 22:47:15.188520  134327 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 22:47:15.195495  134327 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 22:47:15.195527  134327 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 22:47:15.202385  134327 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 22:47:15.304277  134327 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 22:47:15.304802  134327 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 22:47:15.360296  134327 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
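
The second attempt hits the same cgroups-v1 deprecation warning. Which cgroup version the node actually runs can be cross-checked with a filesystem probe; a sketch (cgroup2fs indicates v2, tmpfs indicates a v1 hierarchy):

    # Inside the node: identify the cgroup filesystem type
    minikube ssh -p functional-384766 -- stat -fc %T /sys/fs/cgroup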
	I1222 22:51:15.873250  134327 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 22:51:15.873301  134327 kubeadm.go:319] 
	I1222 22:51:15.873379  134327 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 22:51:15.876563  134327 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 22:51:15.876648  134327 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 22:51:15.876741  134327 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 22:51:15.876789  134327 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 22:51:15.876817  134327 kubeadm.go:319] OS: Linux
	I1222 22:51:15.876860  134327 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 22:51:15.876898  134327 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 22:51:15.876939  134327 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 22:51:15.876980  134327 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 22:51:15.877051  134327 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 22:51:15.877112  134327 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 22:51:15.877162  134327 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 22:51:15.877207  134327 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 22:51:15.877279  134327 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 22:51:15.877385  134327 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 22:51:15.877523  134327 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 22:51:15.877645  134327 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 22:51:15.877709  134327 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 22:51:15.879470  134327 out.go:252]   - Generating certificates and keys ...
	I1222 22:51:15.879536  134327 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 22:51:15.879616  134327 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 22:51:15.879730  134327 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 22:51:15.879815  134327 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 22:51:15.879907  134327 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 22:51:15.879981  134327 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 22:51:15.880071  134327 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 22:51:15.880156  134327 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 22:51:15.880222  134327 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 22:51:15.880287  134327 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 22:51:15.880316  134327 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 22:51:15.880365  134327 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 22:51:15.880409  134327 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 22:51:15.880454  134327 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 22:51:15.880500  134327 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 22:51:15.880550  134327 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 22:51:15.880645  134327 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 22:51:15.880761  134327 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 22:51:15.880828  134327 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 22:51:15.882035  134327 out.go:252]   - Booting up control plane ...
	I1222 22:51:15.882102  134327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 22:51:15.882164  134327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 22:51:15.882233  134327 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 22:51:15.882343  134327 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 22:51:15.882437  134327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 22:51:15.882520  134327 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 22:51:15.882583  134327 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 22:51:15.882649  134327 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 22:51:15.882790  134327 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 22:51:15.882922  134327 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 22:51:15.882978  134327 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000802765s
	I1222 22:51:15.882981  134327 kubeadm.go:319] 
	I1222 22:51:15.883028  134327 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 22:51:15.883054  134327 kubeadm.go:319] 	- The kubelet is not running
	I1222 22:51:15.883146  134327 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 22:51:15.883150  134327 kubeadm.go:319] 
	I1222 22:51:15.883240  134327 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 22:51:15.883272  134327 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 22:51:15.883297  134327 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 22:51:15.883327  134327 kubeadm.go:319] 
	I1222 22:51:15.883374  134327 kubeadm.go:403] duration metric: took 8m2.879554887s to StartCluster
	I1222 22:51:15.883454  134327 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 22:51:15.883515  134327 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 22:51:15.919830  134327 cri.go:96] found id: ""
	I1222 22:51:15.919859  134327 logs.go:282] 0 containers: []
	W1222 22:51:15.919870  134327 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:51:15.919878  134327 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 22:51:15.919930  134327 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 22:51:15.944698  134327 cri.go:96] found id: ""
	I1222 22:51:15.944716  134327 logs.go:282] 0 containers: []
	W1222 22:51:15.944725  134327 logs.go:284] No container was found matching "etcd"
	I1222 22:51:15.944760  134327 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 22:51:15.944815  134327 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 22:51:15.969109  134327 cri.go:96] found id: ""
	I1222 22:51:15.969124  134327 logs.go:282] 0 containers: []
	W1222 22:51:15.969131  134327 logs.go:284] No container was found matching "coredns"
	I1222 22:51:15.969136  134327 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 22:51:15.969180  134327 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 22:51:15.993741  134327 cri.go:96] found id: ""
	I1222 22:51:15.993756  134327 logs.go:282] 0 containers: []
	W1222 22:51:15.993762  134327 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:51:15.993770  134327 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 22:51:15.993825  134327 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 22:51:16.022352  134327 cri.go:96] found id: ""
	I1222 22:51:16.022377  134327 logs.go:282] 0 containers: []
	W1222 22:51:16.022387  134327 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:51:16.022394  134327 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 22:51:16.022453  134327 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 22:51:16.051944  134327 cri.go:96] found id: ""
	I1222 22:51:16.051963  134327 logs.go:282] 0 containers: []
	W1222 22:51:16.051973  134327 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:51:16.051980  134327 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 22:51:16.052030  134327 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 22:51:16.079795  134327 cri.go:96] found id: ""
	I1222 22:51:16.079810  134327 logs.go:282] 0 containers: []
	W1222 22:51:16.079817  134327 logs.go:284] No container was found matching "kindnet"
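
With the init failed, minikube probes the CRI for each expected container and finds none. The same probe can be run by hand; the flags below are copied from this log:

    # List any kube-apiserver containers (running or exited) via the CRI
    minikube ssh -p functional-384766 -- \
      sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver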
	I1222 22:51:16.079834  134327 logs.go:123] Gathering logs for kubelet ...
	I1222 22:51:16.079844  134327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:51:16.127554  134327 logs.go:123] Gathering logs for dmesg ...
	I1222 22:51:16.127582  134327 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:51:16.143319  134327 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:51:16.143338  134327 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:51:16.199784  134327 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:51:16.192528    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.193128    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.194732    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.195115    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.196676    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:51:16.192528    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.193128    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.194732    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.195115    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:16.196676    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:51:16.199815  134327 logs.go:123] Gathering logs for Docker ...
	I1222 22:51:16.199829  134327 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:51:16.221641  134327 logs.go:123] Gathering logs for container status ...
	I1222 22:51:16.221662  134327 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
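
The gathering steps above (kubelet journal, dmesg, `kubectl describe nodes`, Docker journal, container status) are roughly what `minikube logs` bundles. A sketch reproducing the journal and container-status steps manually, commands taken from this log:

    # Recent kubelet and container-runtime journals, plus container status
    minikube ssh -p functional-384766 -- 'sudo journalctl -u kubelet -n 400 --no-pager;
      sudo journalctl -u docker -u cri-docker -n 400 --no-pager;
      sudo crictl ps -a || sudo docker ps -a'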
	W1222 22:51:16.249910  134327 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000802765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 22:51:16.249952  134327 out.go:285] * 
	W1222 22:51:16.250032  134327 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000802765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 22:51:16.250052  134327 out.go:285] * 
	W1222 22:51:16.250321  134327 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 22:51:16.253724  134327 out.go:203] 
	W1222 22:51:16.254723  134327 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output shown in the "Error starting cluster" message above]
	
	W1222 22:51:16.254765  134327 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 22:51:16.254780  134327 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 22:51:16.256500  134327 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:43:10 functional-384766 dockerd[1192]: time="2025-12-22T22:43:10.968454608Z" level=info msg="Restoring containers: start."
	Dec 22 22:43:10 functional-384766 dockerd[1192]: time="2025-12-22T22:43:10.982234726Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 22:43:10 functional-384766 dockerd[1192]: time="2025-12-22T22:43:10.993188038Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.453024800Z" level=info msg="Loading containers: done."
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.463301670Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.463344433Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.463376191Z" level=info msg="Initializing buildkit"
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.480778445Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.484945277Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.485013905Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.485049657Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:43:11 functional-384766 dockerd[1192]: time="2025-12-22T22:43:11.485052110Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:43:11 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:43:11 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:43:11 functional-384766 cri-dockerd[1482]: time="2025-12-22T22:43:11Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:43:11 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:51:17.095733    9455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:17.096227    9455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:17.097809    9455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:17.098172    9455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:51:17.099645    9455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 22:51:17 up  2:33,  0 user,  load average: 0.01, 0.23, 0.71
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 22:51:13 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:51:14 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 22:51:14 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:14 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:14 functional-384766 kubelet[9167]: E1222 22:51:14.535514    9167 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:51:14 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:51:14 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:51:15 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 22:51:15 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:15 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:15 functional-384766 kubelet[9179]: E1222 22:51:15.286791    9179 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:51:15 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:51:15 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:51:15 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 22:51:15 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:15 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:16 functional-384766 kubelet[9243]: E1222 22:51:16.039425    9243 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:51:16 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:51:16 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:51:16 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 22:51:16 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:16 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:51:16 functional-384766 kubelet[9340]: E1222 22:51:16.785283    9340 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:51:16 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:51:16 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
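
The "==> kubelet <==" excerpt above shows the root cause of this failure: kubelet v1.35.0-rc.1 fails its own configuration validation on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so it restarts in a loop, kubeadm's wait-control-plane phase times out after 4m0s, and the apiserver never comes up. A minimal sketch of how to inspect and retry this by hand, assuming the functional-384766 profile from the log still exists (the cgroup-driver flag is taken from minikube's own suggestion line above, and it is not certain it helps here, since the validation error concerns cgroup v1 itself rather than the driver):

    # inspect the kubelet restart loop (the commands kubeadm itself suggests)
    minikube ssh -p functional-384766 "sudo systemctl status kubelet"
    minikube ssh -p functional-384766 "sudo journalctl -xeu kubelet"

    # retry with the flag from minikube's suggestion line
    minikube start -p functional-384766 --extra-config=kubelet.cgroup-driver=systemd

The SystemVerification warning in the kubeadm output names the other knob: cgroup v1 support for kubelet v1.35+ must be explicitly re-enabled via the kubelet configuration option 'FailCgroupV1'. A sketch of the corresponding KubeletConfiguration fragment follows; the YAML field name failCgroupV1 and the idea of editing the generated /var/lib/kubelet/config.yaml are assumptions here (minikube already applies a kubeletconfiguration patch during init, per the [patches] line in the kubeadm output):

    # fragment of /var/lib/kubelet/config.yaml (assumed field name, per the kubeadm warning)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false

Per the same warning, the SystemVerification preflight check must also be skipped explicitly, so this fragment alone may not be sufficient.
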
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 6 (313.513443ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 22:51:17.496915  146607 status.go:458] kubeconfig endpoint: get endpoint: "functional-384766" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
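
Besides the dead apiserver, the status output above flags a stale kubectl context: per the status.go error, the functional-384766 endpoint is missing from the kubeconfig. Assuming the profile still exists, the hint minikube prints can be followed as-is:

    minikube update-context -p functional-384766
    kubectl config current-context    # should report functional-384766 again
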
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (497.72s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (367.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1222 22:51:17.513093   75803 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384766 --alsologtostderr -v=8
E1222 22:51:30.660126   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:51:58.351867   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:55:31.033685   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:56:30.659791   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:56:54.087415   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-384766 --alsologtostderr -v=8: exit status 80 (6m5.249735134s)

                                                
                                                
-- stdout --
	* [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1222 22:51:17.565426  146734 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:51:17.565716  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565727  146734 out.go:374] Setting ErrFile to fd 2...
	I1222 22:51:17.565732  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565972  146734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:51:17.566463  146734 out.go:368] Setting JSON to false
	I1222 22:51:17.567434  146734 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9218,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:51:17.567486  146734 start.go:143] virtualization: kvm guest
	I1222 22:51:17.569465  146734 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:51:17.570460  146734 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:51:17.570465  146734 notify.go:221] Checking for updates...
	I1222 22:51:17.572456  146734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:51:17.573608  146734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:17.574791  146734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:51:17.575840  146734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:51:17.576824  146734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:51:17.578279  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:17.578404  146734 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:51:17.602058  146734 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:51:17.602223  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.652786  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.644025132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.652901  146734 docker.go:319] overlay module found
	I1222 22:51:17.655127  146734 out.go:179] * Using the docker driver based on existing profile
	I1222 22:51:17.656150  146734 start.go:309] selected driver: docker
	I1222 22:51:17.656165  146734 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.656249  146734 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:51:17.656337  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.716062  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.707507925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.716919  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:17.717012  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:17.717085  146734 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.719515  146734 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:51:17.720631  146734 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:51:17.721792  146734 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:51:17.723064  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:17.723095  146734 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:51:17.723112  146734 cache.go:65] Caching tarball of preloaded images
	I1222 22:51:17.723172  146734 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:51:17.723191  146734 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:51:17.723198  146734 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:51:17.723299  146734 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:51:17.742349  146734 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:51:17.742368  146734 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:51:17.742396  146734 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:51:17.742444  146734 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:51:17.742506  146734 start.go:364] duration metric: took 41.881µs to acquireMachinesLock for "functional-384766"
	I1222 22:51:17.742535  146734 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:51:17.742545  146734 fix.go:54] fixHost starting: 
	I1222 22:51:17.742810  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:17.759507  146734 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:51:17.759531  146734 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:51:17.761090  146734 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:51:17.761123  146734 machine.go:94] provisionDockerMachine start ...
	I1222 22:51:17.761180  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.778682  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.778900  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.778912  146734 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:51:17.919326  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:17.919369  146734 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:51:17.919431  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.936992  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.937221  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.937234  146734 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:51:18.086470  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:18.086564  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.104748  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.105051  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.105077  146734 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:51:18.246730  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:51:18.246760  146734 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:51:18.246782  146734 ubuntu.go:190] setting up certificates
	I1222 22:51:18.246792  146734 provision.go:84] configureAuth start
	I1222 22:51:18.246854  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:18.265782  146734 provision.go:143] copyHostCerts
	I1222 22:51:18.265828  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.265879  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:51:18.265900  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.266005  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:51:18.266139  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266163  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:51:18.266175  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266220  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:51:18.266317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266344  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:51:18.266355  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266400  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:51:18.266499  146734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:51:18.330118  146734 provision.go:177] copyRemoteCerts
	I1222 22:51:18.330177  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:51:18.330210  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.347420  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:18.447556  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 22:51:18.447646  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:51:18.464129  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 22:51:18.464180  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:51:18.480702  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 22:51:18.480757  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 22:51:18.496998  146734 provision.go:87] duration metric: took 250.195084ms to configureAuth
	I1222 22:51:18.497021  146734 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:51:18.497168  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:18.497218  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.514380  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.514623  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.514636  146734 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:51:18.655354  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:51:18.655383  146734 ubuntu.go:71] root file system type: overlay
	I1222 22:51:18.655533  146734 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:51:18.655634  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.673540  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.673819  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.673915  146734 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:51:18.823487  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:51:18.823601  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.841347  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.841608  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.841639  146734 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:51:18.987007  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:51:18.987042  146734 machine.go:97] duration metric: took 1.225905804s to provisionDockerMachine
	I1222 22:51:18.987059  146734 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:51:18.987075  146734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:51:18.987145  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:51:18.987199  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.006696  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.107530  146734 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:51:19.110931  146734 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 22:51:19.110952  146734 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 22:51:19.110959  146734 command_runner.go:130] > VERSION_ID="12"
	I1222 22:51:19.110964  146734 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 22:51:19.110979  146734 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 22:51:19.110985  146734 command_runner.go:130] > ID=debian
	I1222 22:51:19.110992  146734 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 22:51:19.111000  146734 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 22:51:19.111012  146734 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 22:51:19.111100  146734 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:51:19.111124  146734 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:51:19.111137  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:51:19.111205  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:51:19.111317  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:51:19.111330  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /etc/ssl/certs/758032.pem
	I1222 22:51:19.111426  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:51:19.111434  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> /etc/test/nested/copy/75803/hosts
	I1222 22:51:19.111495  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:51:19.119122  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:19.135900  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:51:19.152438  146734 start.go:296] duration metric: took 165.360222ms for postStartSetup
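Everything staged under the profile's .minikube/files tree is mirrored into the node with its relative path preserved, which is how the 758032.pem bundle and the nested hosts file above land under /etc. A minimal sketch of seeding such an asset (file and directory names are illustrative):

	mkdir -p ~/.minikube/files/etc/myapp
	cp config.toml ~/.minikube/files/etc/myapp/config.toml
	minikube start   # the file now appears at /etc/myapp/config.toml inside the node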
	I1222 22:51:19.152512  146734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:51:19.152568  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.170181  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.267525  146734 command_runner.go:130] > 37%
	I1222 22:51:19.267628  146734 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:51:19.272133  146734 command_runner.go:130] > 185G
	I1222 22:51:19.272164  146734 fix.go:56] duration metric: took 1.529618595s for fixHost
	I1222 22:51:19.272178  146734 start.go:83] releasing machines lock for "functional-384766", held for 1.529658247s
	I1222 22:51:19.272243  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:19.290506  146734 ssh_runner.go:195] Run: cat /version.json
	I1222 22:51:19.290562  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.290583  146734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:51:19.290685  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.307884  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.308688  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.461522  146734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 22:51:19.463216  146734 command_runner.go:130] > {"iso_version": "v1.37.0-1766254259-22261", "kicbase_version": "v0.0.48-1766394456-22288", "minikube_version": "v1.37.0", "commit": "069cfc84263169a672fdad8d37486b5cb35673ac"}
	I1222 22:51:19.463366  146734 ssh_runner.go:195] Run: systemctl --version
	I1222 22:51:19.469697  146734 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 22:51:19.469761  146734 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 22:51:19.469847  146734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 22:51:19.474292  146734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 22:51:19.474367  146734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:51:19.474416  146734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:51:19.482031  146734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:51:19.482056  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.482091  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.482215  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.495227  146734 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 22:51:19.495298  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:51:19.503438  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:51:19.511525  146734 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:51:19.511574  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:51:19.519676  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.527517  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:51:19.535615  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.543569  146734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:51:19.550965  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:51:19.559037  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:51:19.567079  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:51:19.575222  146734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:51:19.582102  146734 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 22:51:19.582154  146734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:51:19.588882  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:19.668907  146734 ssh_runner.go:195] Run: sudo systemctl restart containerd
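The sed passes above pin containerd to the cgroupfs driver detected on the host (SystemdCgroup = false), the runc v2 shim, and the pause:3.10.1 sandbox image before the daemon is restarted. Condensed to its essentials, the rewrite is (a sketch, assuming a stock /etc/containerd/config.toml):

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd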
	I1222 22:51:19.740882  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.740926  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.740967  146734 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:51:19.753727  146734 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1222 22:51:19.753762  146734 command_runner.go:130] > [Unit]
	I1222 22:51:19.753770  146734 command_runner.go:130] > Description=Docker Application Container Engine
	I1222 22:51:19.753778  146734 command_runner.go:130] > Documentation=https://docs.docker.com
	I1222 22:51:19.753787  146734 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1222 22:51:19.753797  146734 command_runner.go:130] > Wants=network-online.target containerd.service
	I1222 22:51:19.753808  146734 command_runner.go:130] > Requires=docker.socket
	I1222 22:51:19.753815  146734 command_runner.go:130] > StartLimitBurst=3
	I1222 22:51:19.753825  146734 command_runner.go:130] > StartLimitIntervalSec=60
	I1222 22:51:19.753833  146734 command_runner.go:130] > [Service]
	I1222 22:51:19.753841  146734 command_runner.go:130] > Type=notify
	I1222 22:51:19.753848  146734 command_runner.go:130] > Restart=always
	I1222 22:51:19.753862  146734 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1222 22:51:19.753882  146734 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1222 22:51:19.753896  146734 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1222 22:51:19.753910  146734 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1222 22:51:19.753923  146734 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1222 22:51:19.753937  146734 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1222 22:51:19.753952  146734 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1222 22:51:19.753969  146734 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1222 22:51:19.753983  146734 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1222 22:51:19.753991  146734 command_runner.go:130] > ExecStart=
	I1222 22:51:19.754018  146734 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1222 22:51:19.754031  146734 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1222 22:51:19.754046  146734 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1222 22:51:19.754060  146734 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1222 22:51:19.754067  146734 command_runner.go:130] > LimitNOFILE=infinity
	I1222 22:51:19.754076  146734 command_runner.go:130] > LimitNPROC=infinity
	I1222 22:51:19.754084  146734 command_runner.go:130] > LimitCORE=infinity
	I1222 22:51:19.754095  146734 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1222 22:51:19.754107  146734 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1222 22:51:19.754115  146734 command_runner.go:130] > TasksMax=infinity
	I1222 22:51:19.754124  146734 command_runner.go:130] > TimeoutStartSec=0
	I1222 22:51:19.754137  146734 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1222 22:51:19.754152  146734 command_runner.go:130] > Delegate=yes
	I1222 22:51:19.754162  146734 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1222 22:51:19.754171  146734 command_runner.go:130] > KillMode=process
	I1222 22:51:19.754179  146734 command_runner.go:130] > OOMScoreAdjust=-500
	I1222 22:51:19.754187  146734 command_runner.go:130] > [Install]
	I1222 22:51:19.754196  146734 command_runner.go:130] > WantedBy=multi-user.target
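The empty ExecStart= directive in this unit is what clears the command inherited from the packaged dockerd unit; without it, systemd would see two ExecStart= settings and refuse to start a Type=notify service. The same pattern in a conventional drop-in looks like this (a sketch; the dockerd flags are illustrative):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/10-override.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker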
	I1222 22:51:19.754834  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.766639  146734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:51:19.781290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.792290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:51:19.803490  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.815697  146734 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
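With dockerd selected as the runtime, /etc/crictl.yaml is rewritten so crictl talks to cri-dockerd instead of containerd. The switch can be verified directly (a sketch):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version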
	I1222 22:51:19.816642  146734 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:51:19.820210  146734 command_runner.go:130] > /usr/bin/cri-dockerd
	I1222 22:51:19.820315  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:51:19.827693  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:51:19.839649  146734 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:51:19.921176  146734 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:51:20.004043  146734 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:51:20.004160  146734 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:51:20.017007  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:51:20.028524  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:20.107815  146734 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:51:20.801234  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:51:20.813428  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:51:20.824782  146734 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:51:20.839450  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:20.850829  146734 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:51:20.931099  146734 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:51:21.012149  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.092742  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:51:21.120647  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:51:21.132196  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.256485  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:51:21.327564  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:21.340042  146734 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:51:21.340117  146734 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:51:21.343842  146734 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1222 22:51:21.343869  146734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 22:51:21.343877  146734 command_runner.go:130] > Device: 0,75	Inode: 1744        Links: 1
	I1222 22:51:21.343888  146734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1222 22:51:21.343895  146734 command_runner.go:130] > Access: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343909  146734 command_runner.go:130] > Modify: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343924  146734 command_runner.go:130] > Change: 2025-12-22 22:51:21.279839753 +0000
	I1222 22:51:21.343935  146734 command_runner.go:130] >  Birth: 2025-12-22 22:51:21.266838495 +0000
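The 60s wait on /var/run/cri-dockerd.sock is a simple existence poll of the socket path, satisfied here by the stat above. A hand-rolled equivalent (a sketch):

	# poll for the CRI socket with a 60-second deadline
	for i in $(seq 1 60); do
	  stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
	  sleep 1
	done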
	I1222 22:51:21.343976  146734 start.go:564] Will wait 60s for crictl version
	I1222 22:51:21.344020  146734 ssh_runner.go:195] Run: which crictl
	I1222 22:51:21.347282  146734 command_runner.go:130] > /usr/local/bin/crictl
	I1222 22:51:21.347341  146734 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:51:21.370719  146734 command_runner.go:130] > Version:  0.1.0
	I1222 22:51:21.370739  146734 command_runner.go:130] > RuntimeName:  docker
	I1222 22:51:21.370743  146734 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1222 22:51:21.370748  146734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 22:51:21.370764  146734 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:51:21.370812  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.395767  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.395836  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.418820  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.422122  146734 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:51:21.422206  146734 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:51:21.439338  146734 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:51:21.443526  146734 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 22:51:21.443628  146734 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:51:21.443753  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:21.443822  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.464281  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.464308  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.464318  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.464325  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.464332  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.464340  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.464348  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.464366  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.464395  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.464407  146734 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:51:21.464455  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.482666  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.482684  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.482690  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.482697  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.482704  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.482712  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.482729  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.482739  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.483998  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.484022  146734 cache_images.go:86] Images are preloaded, skipping loading
	I1222 22:51:21.484036  146734 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:51:21.484172  146734 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 22:51:21.484238  146734 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:51:21.532066  146734 command_runner.go:130] > cgroupfs
	I1222 22:51:21.533783  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:21.533808  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:21.533825  146734 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:51:21.533845  146734 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:51:21.533961  146734 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
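This rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new and only swapped in when it differs from the file already on the node (see the diff run later in this log). Inside the node it could be sanity-checked without mutating anything (a sketch):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run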
	I1222 22:51:21.534020  146734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:51:21.542124  146734 command_runner.go:130] > kubeadm
	I1222 22:51:21.542141  146734 command_runner.go:130] > kubectl
	I1222 22:51:21.542144  146734 command_runner.go:130] > kubelet
	I1222 22:51:21.542165  146734 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:51:21.542214  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:51:21.549624  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:51:21.561393  146734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:51:21.572932  146734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1222 22:51:21.584412  146734 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:51:21.587798  146734 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 22:51:21.587903  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.667778  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:21.997732  146734 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:51:21.997755  146734 certs.go:195] generating shared ca certs ...
	I1222 22:51:21.997774  146734 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:21.997942  146734 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:51:21.998024  146734 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:51:21.998042  146734 certs.go:257] generating profile certs ...
	I1222 22:51:21.998184  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:51:21.998247  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:51:21.998298  146734 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:51:21.998317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 22:51:21.998340  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 22:51:21.998365  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 22:51:21.998382  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 22:51:21.998399  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 22:51:21.998418  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 22:51:21.998436  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 22:51:21.998454  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 22:51:21.998527  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:51:21.998578  146734 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:51:21.998635  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:51:21.998684  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:51:21.998717  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:51:21.998750  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:51:21.998813  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:21.998854  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /usr/share/ca-certificates/758032.pem
	I1222 22:51:21.998877  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:21.998896  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem -> /usr/share/ca-certificates/75803.pem
	I1222 22:51:21.999493  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:51:22.018141  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:51:22.036416  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:51:22.053080  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:51:22.069323  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:51:22.085369  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:51:22.101485  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:51:22.117634  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:51:22.133612  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:51:22.150125  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:51:22.166578  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:51:22.182911  146734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:51:22.194486  146734 ssh_runner.go:195] Run: openssl version
	I1222 22:51:22.199935  146734 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 22:51:22.200169  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.206913  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:51:22.213732  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217037  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217075  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217111  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.249675  146734 command_runner.go:130] > b5213941
	I1222 22:51:22.250033  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:51:22.257095  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.264071  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:51:22.271042  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274411  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274445  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274483  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.307772  146734 command_runner.go:130] > 51391683
	I1222 22:51:22.308113  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 22:51:22.315176  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.322196  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:51:22.329109  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332667  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332691  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332732  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.365940  146734 command_runner.go:130] > 3ec20f2e
	I1222 22:51:22.366181  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
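Each trusted CA is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, and 3ec20f2e.0 above), which is how OpenSSL-based clients locate it. The generic recipe (a sketch; the certificate path is illustrative):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myCA.pem)
	sudo ln -fs /usr/share/ca-certificates/myCA.pem /etc/ssl/certs/${hash}.0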
	I1222 22:51:22.373802  146734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377513  146734 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377537  146734 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 22:51:22.377543  146734 command_runner.go:130] > Device: 8,1	Inode: 809094      Links: 1
	I1222 22:51:22.377550  146734 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 22:51:22.377558  146734 command_runner.go:130] > Access: 2025-12-22 22:47:15.370061162 +0000
	I1222 22:51:22.377566  146734 command_runner.go:130] > Modify: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377574  146734 command_runner.go:130] > Change: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377602  146734 command_runner.go:130] >  Birth: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377678  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:51:22.411266  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.411570  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:51:22.445025  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.445322  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:51:22.479095  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.479395  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:51:22.512263  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.512537  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:51:22.545264  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.545554  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 22:51:22.578867  146734 command_runner.go:130] > Certificate will not expire
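Each -checkend 86400 probe asks whether the certificate will still be valid 24 hours from now; openssl exits 0 and prints "Certificate will not expire" if so, and a non-zero exit is what would trigger regeneration. For example (a sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo ok || echo renew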
	I1222 22:51:22.579164  146734 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:22.579364  146734 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:51:22.598061  146734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:51:22.605833  146734 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 22:51:22.605851  146734 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 22:51:22.605860  146734 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 22:51:22.605880  146734 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:51:22.605891  146734 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:51:22.605932  146734 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:51:22.613011  146734 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:51:22.613379  146734 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384766" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.613493  146734 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384766" cluster setting kubeconfig missing "functional-384766" context setting]
	I1222 22:51:22.613840  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.614238  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.614401  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.614887  146734 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 22:51:22.614906  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 22:51:22.614915  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1222 22:51:22.614921  146734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1222 22:51:22.614926  146734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1222 22:51:22.614933  146734 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 22:51:22.614941  146734 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 22:51:22.615340  146734 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:51:22.622321  146734 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 22:51:22.622350  146734 kubeadm.go:602] duration metric: took 16.45181ms to restartPrimaryControlPlane
	I1222 22:51:22.622360  146734 kubeadm.go:403] duration metric: took 43.204719ms to StartCluster
	I1222 22:51:22.622376  146734 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.622430  146734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.622875  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.623066  146734 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 22:51:22.623138  146734 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 22:51:22.623233  146734 addons.go:70] Setting storage-provisioner=true in profile "functional-384766"
	I1222 22:51:22.623261  146734 addons.go:239] Setting addon storage-provisioner=true in "functional-384766"
	I1222 22:51:22.623284  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:22.623296  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.623288  146734 addons.go:70] Setting default-storageclass=true in profile "functional-384766"
	I1222 22:51:22.623322  146734 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384766"
	I1222 22:51:22.623660  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.623809  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.624438  146734 out.go:179] * Verifying Kubernetes components...
	I1222 22:51:22.625531  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:22.644170  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.644380  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.644456  146734 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:22.644766  146734 addons.go:239] Setting addon default-storageclass=true in "functional-384766"
	I1222 22:51:22.644810  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.645336  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.645513  146734 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.645531  146734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 22:51:22.645584  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.667387  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.668028  146734 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.668061  146734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 22:51:22.668129  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.686127  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.735817  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:22.749391  146734 node_ready.go:35] waiting up to 6m0s for node "functional-384766" to be "Ready" ...
	I1222 22:51:22.749553  146734 type.go:165] "Request Body" body=""
	I1222 22:51:22.749681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:22.749924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:22.791529  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.791702  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.858228  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.858293  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858334  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858349  146734 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.860247  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.114793  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.124266  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.170075  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.170134  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.179073  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.179145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.250418  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.250774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.384101  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.434813  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.434866  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.600155  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.651352  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.651412  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.749655  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.749735  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.901355  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.952200  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.952267  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.239666  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:24.250121  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.250189  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.250430  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:24.294448  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.294492  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.750059  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.750149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:24.750582  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
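The node_ready waiter runs in parallel with the addon retries: it polls GET /api/v1/nodes/functional-384766 twice a second (the request timestamps alternate between .250 and .750) for up to the 6m0s announced at the start of the wait, and emits the warning above whenever the TCP connection is refused. A minimal sketch of such a poll loop, assuming placeholder client wiring instead of minikube's authenticated round-tripper:

	// Illustrative only: poll a node URL every 500ms until a deadline.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "https://192.168.49.2:8441/api/v1/nodes/functional-384766"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url) // real code would use an authenticated client
			if err != nil {
				fmt.Printf("error getting node (will retry): %v\n", err)
			} else {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return // caller would then inspect the node's Ready condition
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the .250/.750 cadence above
		}
	}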
	I1222 22:51:24.937883  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:24.989534  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.989576  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.250004  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.250083  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.250431  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:25.372773  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:25.425171  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:25.425216  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.749629  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.749702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.750010  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.170572  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:26.222069  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.222131  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.250327  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.250414  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.250759  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.440137  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:26.491948  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.492006  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.750538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.750885  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:26.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:27.250541  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.250646  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:27.355175  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:27.403566  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:27.406149  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:27.749989  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.750066  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.750396  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.250002  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.250075  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.250397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.438810  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:28.487114  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:28.489616  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:28.750061  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.750134  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.750419  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:29.250032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.250106  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.250445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:29.250522  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:29.750041  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.750138  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.750509  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.249736  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.249807  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.250111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.636760  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:30.689934  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:30.689988  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:30.750216  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.750316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.750667  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:31.250328  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.250434  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.250799  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:31.250876  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:31.750450  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.750530  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.750869  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.250580  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.711876  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:32.750368  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.750445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.750774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.760899  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:32.763771  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.250469  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.250543  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:33.250917  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:33.406152  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:33.457687  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:33.457745  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.750192  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.750291  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.750643  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.250274  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.250352  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.250676  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.749812  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.749877  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.249850  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.516575  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:35.570400  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:35.570450  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:35.749757  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.749831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.750172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:35.750238  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:36.249789  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.249888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:36.749817  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.749889  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.750217  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.249921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.250262  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.750101  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.750202  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.750527  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:37.750609  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:38.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.250333  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.250692  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:38.358924  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:38.409955  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:38.410034  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:38.750557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.750654  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.750998  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.249528  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.249647  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.249920  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.749563  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.749697  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.750029  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:40.249635  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.249710  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.250037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:40.250107  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:40.749663  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.749734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.750058  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.249687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.750194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:42.249756  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.249861  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:42.250268  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:42.750151  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.750674  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.250325  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.250412  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.250779  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.750422  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.750837  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:44.250504  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.250574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.250924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:44.250995  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:44.670446  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:44.719419  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:44.722302  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:44.722343  146734 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
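Across the section the logged retry delays grow from the initial 300ms into the 9 to 19 second range (11.9s here, then 8.9s and 19.1s for the two manifests further down), which is consistent with an exponential backoff with added jitter (0.3s doubled six times is 19.2s). A hypothetical sketch of such a schedule; the exact multiplier and jitter minikube uses may differ:

	// Illustrative only: print a jittered doubling backoff schedule that
	// roughly matches the delays quoted in the retry.go lines.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		d := 300 * time.Millisecond
		for i := 0; i < 7; i++ {
			jitter := time.Duration(rand.Int63n(int64(d) / 2)) // up to +50%
			fmt.Println(d + jitter)
			d *= 2
		}
	}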
	I1222 22:51:44.750527  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.750632  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.750954  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.250633  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.250718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.251044  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.749665  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.749749  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.249725  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.749687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.749756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.750050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:46.750108  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:47.249647  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.249753  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.250081  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:47.750272  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.750344  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.250351  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.250445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.250816  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.750455  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.750540  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.750902  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:48.750964  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:49.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.250653  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.250985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:49.750603  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.750681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.249551  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.249641  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.249968  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.750576  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.750686  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.751008  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:50.751079  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:51.249557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.249656  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.249983  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:51.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.750094  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.572783  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:52.624461  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:52.624509  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.624539  146734 retry.go:84] will retry after 8.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.749682  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.749751  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:53.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:53.250202  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:53.749743  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.750165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.249922  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.250320  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.749879  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.750325  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:55.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.249811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.250186  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:55.250256  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:55.749720  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.750101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.249777  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.630698  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:56.682682  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:56.682728  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:56.682750  146734 retry.go:84] will retry after 19.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
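The storageclass apply fails for the same underlying reason and is handed to minikube's retry helper (retry.go), which schedules another attempt. The waits logged in this section (19.1s here, then 11.1s, 41s, 22s, and 24.9s for the other addon applies) are not a fixed schedule, which suggests randomized backoff. Below is a sketch of that pattern under the assumption of growing, jittered waits; retryCommand and its constants are invented for illustration, not minikube's actual retry logic.

// Sketch of a jittered-backoff retry for an external command, in the
// spirit of the retry.go "will retry after Ns" lines above. The name
// and backoff constants are assumptions.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryCommand reruns the command until it succeeds or attempts are
// exhausted, sleeping a randomized, growing interval between tries.
func retryCommand(attempts int, base time.Duration, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		var out []byte
		out, err = exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		// Grow the wait each round and add jitter so concurrent
		// retries do not line up against the apiserver.
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n%s", wait, err, out)
		time.Sleep(wait)
	}
	return err
}

func main() {
	err := retryCommand(5, 10*time.Second,
		"kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	fmt.Println(err)
}

Jitter matters in this trace because several addon applies (storageclass and storage-provisioner) retry concurrently; randomized waits keep them from hammering the apiserver in lockstep once it comes back.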
	I1222 22:51:56.749962  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.750390  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.249859  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.250169  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.750032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.750112  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.750459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:57.750526  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:58.250052  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.250129  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.250484  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:58.750074  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.750164  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.750559  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.250376  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.250455  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.750547  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.750668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.751053  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:59.751124  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:00.249679  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.249756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.250124  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:00.749766  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.749870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.750200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.250214  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.555677  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:01.608817  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:01.608873  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.608898  146734 retry.go:84] will retry after 11.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.750139  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.750232  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.750541  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:02.250350  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.250446  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.250884  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:02.250959  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:02.749991  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.750087  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.750489  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.249785  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.250222  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.749863  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.749953  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.750330  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.249910  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.249991  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.749787  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.749878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.750255  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:04.750328  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:05.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.249881  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.250215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:05.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.749905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.750236  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.250166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.749813  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.750292  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:06.750353  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:07.249913  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.249997  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.250350  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:07.750157  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.750249  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.750625  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.250269  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.250349  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.250699  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.750338  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.750417  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.750817  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:08.750880  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:09.250447  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.250886  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:09.750542  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.750651  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.751017  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.249667  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.250007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.749614  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.749698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.749986  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:11.249644  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.249721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.250050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:11.250115  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:11.749702  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.749781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.250570  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.250676  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.749204  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:12.749953  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.750037  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.750364  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.803295  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:12.803361  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:12.803388  146734 retry.go:84] will retry after 41s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:13.249864  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.249961  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.250341  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:13.250413  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:13.749947  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.750050  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.750385  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.249969  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.250047  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.250429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.250182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.749736  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.750176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:15.750272  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:15.781356  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:15.834579  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:15.834644  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:15.834678  146734 retry.go:84] will retry after 22s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
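Note what actually fails in these apply attempts: kubectl's client-side validation first tries to download the OpenAPI schema from the apiserver (the Get "https://localhost:8441/openapi/v2" in the stderr above), and that dial is refused. The suggested --validate=false would only skip the schema download; the apply itself still needs a reachable apiserver, so retrying is the right call. A cheap pre-flight probe against the apiserver's /readyz endpoint could avoid spending a retry while the control plane is still down. This is a hypothetical addition (apiServerReady is an invented helper; /readyz is the standard kube-apiserver readiness endpoint), not something minikube does here.

// Sketch: probe the apiserver's readiness endpoint before attempting
// an apply. apiServerReady is a hypothetical helper name.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiServerReady(base string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Verification skipped only for brevity in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return false // connection refused lands here
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiServerReady("https://192.168.49.2:8441"))
}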
	I1222 22:52:16.250185  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.250641  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:16.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.750391  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.750749  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.250375  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.250470  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.250796  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.750663  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.750770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.751155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:17.751219  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:18.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.249774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:18.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.750127  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.249772  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.249846  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.749726  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.750128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:20.249691  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.249767  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.250123  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:20.250193  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:20.749739  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.749820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.750153  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.249730  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.749804  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.750232  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:22.249806  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.250237  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:22.250298  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:22.750142  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.750516  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.250222  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.250332  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.250708  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.750972  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.249568  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.249660  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.249969  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.749566  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.749670  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.750007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:24.750078  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:25.249560  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.249668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.250009  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:25.749615  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.749711  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.249804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.250176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.749791  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.749896  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:26.750329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:27.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.249822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:27.749959  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.750049  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.750408  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.249981  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.250077  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.250414  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.749755  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.750148  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:29.249816  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.249914  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.250248  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:29.250329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:29.749820  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.750206  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.249734  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.250163  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.249714  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.249787  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.250100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.749725  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.750152  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:31.750223  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:32.249759  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.249872  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.250213  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:32.750284  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.750382  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.750808  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.250484  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.250553  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.750582  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.750682  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.751084  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:33.751151  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:34.249678  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:34.749750  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.750178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.249866  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.250195  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.749858  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.749938  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:36.249945  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.250030  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.250372  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:36.250452  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:36.750049  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.750122  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.750536  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.250238  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.250338  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.250710  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.750607  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.750693  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.751037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.791261  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:37.841791  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:37.841848  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:37.841882  146734 retry.go:84] will retry after 24.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:38.250412  146734 type.go:165] "Request Body" body=""
	I1222 22:52:38.250501  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:38.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:38.250927  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:38.750539  146734 type.go:165] "Request Body" body=""
	I1222 22:52:38.750640  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:38.750989  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:39.250534  146734 type.go:165] "Request Body" body=""
	I1222 22:52:39.250619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:39.250903  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:39.750622  146734 type.go:165] "Request Body" body=""
	I1222 22:52:39.750769  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:39.751121  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:40.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:40.249817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:40.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:40.750183  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-384766 request/response cycle repeats every ~500 ms from 22:52:41 through 22:52:53, with the node_ready.go:55 "connection refused" warning logged roughly every 2 s ...]
	W1222 22:52:53.750818  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:53.825965  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:53.875648  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878317  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878441  146734 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
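
The apply fails at the validation step because kubectl first fetches the OpenAPI schema from the API server, which is still refusing connections; as the error itself notes, validation could be skipped with --validate=false. minikube treats the failure as retryable ("apply failed, will retry", addons.go:477). A minimal sketch of that retry-around-kubectl pattern is below; the helper name, attempt count, and backoff are illustrative assumptions, not minikube's actual addons code.

// retry_apply.go: hypothetical sketch of retrying "kubectl apply" while
// the apiserver is unavailable, mirroring the "apply failed, will retry"
// log lines above. Not minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryApply runs kubectl apply up to attempts times, sleeping between
// tries, and returns the last error if none succeed.
func retryApply(kubectl, manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %w\n%s", err, out)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	err := retryApply("/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		3, 5*time.Second) // attempt count and backoff are illustrative
	if err != nil {
		fmt.Println(err)
	}
}
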
	[... polling continues: the same GET /api/v1/nodes/functional-384766 request/response cycle repeats every ~500 ms from 22:52:54 through 22:53:02, with the node_ready.go:55 "connection refused" warning logged roughly every 2 s ...]
	W1222 22:53:02.750554  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:02.761644  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:53:02.811523  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814242  146734 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 22:53:02.815929  146734 out.go:179] * Enabled addons: 
	I1222 22:53:02.817068  146734 addons.go:530] duration metric: took 1m40.193946362s for enable addons: enabled=[]
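
The interleaved GET requests throughout this window are minikube's node-readiness wait loop (node_ready.go): about every 500 ms it fetches the Node object and checks its "Ready" condition, logging the "connection refused" warning and retrying while the apiserver on 192.168.49.2:8441 is down. A minimal sketch of such a poll using client-go follows; it is an assumed shape for illustration, not minikube's actual node_ready.go.

// node_ready_sketch.go: hypothetical sketch of a node "Ready" poll like
// the one producing the request/response lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls every 500ms until the named node reports
// Ready=True or the timeout expires. Transient errors (e.g. connection
// refused while the apiserver restarts) are logged and retried.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-384766", 6*time.Minute))
}
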
	[... polling continues: the same GET /api/v1/nodes/functional-384766 request/response cycle repeats every ~500 ms from 22:53:03 through 22:53:36, with the node_ready.go:55 "connection refused" warning logged roughly every 2 s ...]
	W1222 22:53:36.750366  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:37.249900  146734 type.go:165] "Request Body" body=""
	I1222 22:53:37.249980  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:37.250334  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:37.750193  146734 type.go:165] "Request Body" body=""
	I1222 22:53:37.750266  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:37.750582  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:38.250318  146734 type.go:165] "Request Body" body=""
	I1222 22:53:38.250402  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:38.250780  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:38.750488  146734 type.go:165] "Request Body" body=""
	I1222 22:53:38.750565  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:38.750945  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:38.751009  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:39.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:53:39.250638  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:39.250958  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:39.749568  146734 type.go:165] "Request Body" body=""
	I1222 22:53:39.749673  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:39.750024  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:40.249671  146734 type.go:165] "Request Body" body=""
	I1222 22:53:40.249757  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:40.250102  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:40.749680  146734 type.go:165] "Request Body" body=""
	I1222 22:53:40.749755  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:40.750071  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:41.249740  146734 type.go:165] "Request Body" body=""
	I1222 22:53:41.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:41.250168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:41.250233  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:41.749722  146734 type.go:165] "Request Body" body=""
	I1222 22:53:41.749804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:41.750139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:42.249731  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.249824  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.250150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:42.750207  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.750296  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.750623  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:43.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.250405  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:43.250898  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:43.750513  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.750619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.750993  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.249564  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.249700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.250049  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.749717  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.749793  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.750110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.249673  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.249765  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.749727  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.750168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:45.750236  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:46.249739  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.249823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.250174  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:46.749761  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.749836  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.750164  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.749948  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.750397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:47.750471  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:48.249836  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.249915  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.250168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:48.749809  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.750240  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.249899  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.250266  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:50.249693  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.250117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:50.250187  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:50.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:51.249780  146734 type.go:165] "Request Body" body=""
	I1222 22:53:51.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:51.250247  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:51.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:53:51.749796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:51.750134  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:52.249742  146734 type.go:165] "Request Body" body=""
	I1222 22:53:52.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:52.250156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:52.250222  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:52.750193  146734 type.go:165] "Request Body" body=""
	I1222 22:53:52.750262  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:52.750631  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:53.250282  146734 type.go:165] "Request Body" body=""
	I1222 22:53:53.250365  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:53.250710  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:53.750364  146734 type.go:165] "Request Body" body=""
	I1222 22:53:53.750466  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:53.750806  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:54.250614  146734 type.go:165] "Request Body" body=""
	I1222 22:53:54.250720  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:54.251091  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:54.251160  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:54.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:53:54.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:54.750115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:55.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:53:55.249798  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:55.250115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:55.749721  146734 type.go:165] "Request Body" body=""
	I1222 22:53:55.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:55.750120  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:56.249788  146734 type.go:165] "Request Body" body=""
	I1222 22:53:56.249864  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:56.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:56.749823  146734 type.go:165] "Request Body" body=""
	I1222 22:53:56.749924  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:56.750260  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:56.750323  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:57.249824  146734 type.go:165] "Request Body" body=""
	I1222 22:53:57.249911  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:57.250258  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:57.750028  146734 type.go:165] "Request Body" body=""
	I1222 22:53:57.750101  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:57.750434  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:58.250067  146734 type.go:165] "Request Body" body=""
	I1222 22:53:58.250165  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:58.250680  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:58.750319  146734 type.go:165] "Request Body" body=""
	I1222 22:53:58.750397  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:58.750751  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:58.750816  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:59.250396  146734 type.go:165] "Request Body" body=""
	I1222 22:53:59.250469  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:59.250838  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:59.750501  146734 type.go:165] "Request Body" body=""
	I1222 22:53:59.750579  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:59.750961  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:00.250615  146734 type.go:165] "Request Body" body=""
	I1222 22:54:00.250698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:00.251022  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:00.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:54:00.749712  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:00.750067  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:01.249664  146734 type.go:165] "Request Body" body=""
	I1222 22:54:01.249759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:01.250103  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:01.250170  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:01.749661  146734 type.go:165] "Request Body" body=""
	I1222 22:54:01.749759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:01.750063  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:02.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:02.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:02.250139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:02.750137  146734 type.go:165] "Request Body" body=""
	I1222 22:54:02.750239  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:02.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:03.249783  146734 type.go:165] "Request Body" body=""
	I1222 22:54:03.249870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:03.250198  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:03.250261  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:03.749859  146734 type.go:165] "Request Body" body=""
	I1222 22:54:03.749942  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:03.750266  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:04.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:54:04.249924  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:04.250252  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:04.749814  146734 type.go:165] "Request Body" body=""
	I1222 22:54:04.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:04.750218  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:05.249838  146734 type.go:165] "Request Body" body=""
	I1222 22:54:05.249920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:05.250251  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:05.250311  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:05.749869  146734 type.go:165] "Request Body" body=""
	I1222 22:54:05.749946  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:05.750283  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:06.249832  146734 type.go:165] "Request Body" body=""
	I1222 22:54:06.249906  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:06.250221  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:06.749788  146734 type.go:165] "Request Body" body=""
	I1222 22:54:06.749871  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:06.750209  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:07.249776  146734 type.go:165] "Request Body" body=""
	I1222 22:54:07.249854  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:07.250183  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:07.749835  146734 type.go:165] "Request Body" body=""
	I1222 22:54:07.749920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:07.750202  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:07.750265  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:08.249824  146734 type.go:165] "Request Body" body=""
	I1222 22:54:08.249910  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:08.250234  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:08.749844  146734 type.go:165] "Request Body" body=""
	I1222 22:54:08.749966  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:08.750291  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:09.249871  146734 type.go:165] "Request Body" body=""
	I1222 22:54:09.249975  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:09.250275  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:09.749927  146734 type.go:165] "Request Body" body=""
	I1222 22:54:09.750002  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:09.750347  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:09.750418  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:10.249886  146734 type.go:165] "Request Body" body=""
	I1222 22:54:10.249966  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:10.250298  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:10.749734  146734 type.go:165] "Request Body" body=""
	I1222 22:54:10.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:10.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:11.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.249820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.250144  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:11.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.749862  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.750201  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:12.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.249873  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.250162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:12.250235  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:12.750120  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.750539  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.250225  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.250297  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.250634  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.750359  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.750457  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.750827  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:14.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.250556  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.250898  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:14.250968  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:14.750562  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.750659  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.750987  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.250672  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.250773  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.251113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.749701  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.749784  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.750120  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.249715  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.249790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.250112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.749747  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.749839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.750218  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:16.750293  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:17.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.249860  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:17.750000  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.750088  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.750446  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.250179  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.749732  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.749816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.750154  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:19.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.250196  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:19.250264  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:19.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.749819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.750126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.249785  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.250091  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.750150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.249797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.250083  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.749694  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.749796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.750142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:21.750217  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:22.249754  146734 type.go:165] "Request Body" body=""
	I1222 22:54:22.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:22.250160  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:22.750118  146734 type.go:165] "Request Body" body=""
	I1222 22:54:22.750196  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:22.750523  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:23.250322  146734 type.go:165] "Request Body" body=""
	I1222 22:54:23.250400  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:23.250767  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:23.750404  146734 type.go:165] "Request Body" body=""
	I1222 22:54:23.750488  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:23.750857  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:23.750933  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:24.249571  146734 type.go:165] "Request Body" body=""
	I1222 22:54:24.249681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:24.250051  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:24.749643  146734 type.go:165] "Request Body" body=""
	I1222 22:54:24.749721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:24.750066  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:25.249682  146734 type.go:165] "Request Body" body=""
	I1222 22:54:25.249768  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:25.250100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:25.749662  146734 type.go:165] "Request Body" body=""
	I1222 22:54:25.749739  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:25.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:26.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:54:26.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:26.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:26.250210  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:26.749706  146734 type.go:165] "Request Body" body=""
	I1222 22:54:26.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:26.750145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:28.250411  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:30.750186  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:32.750634  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:35.250148  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:37.250177  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:39.750178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:41.750287  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:43.750759  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:45.750949  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:48.250226  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:50.750164  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:52.750725  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:54.751057  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:57.250148  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:54:59.250182  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:01.250234  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:03.250864  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:05.750227  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:07.750279  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:09.750375  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:12.250386  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:14.750178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:17.250204  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:19.250850  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:21.750422  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:23.750580  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	W1222 22:55:26.250336  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:28.249726  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.249808  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:28.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.749811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:28.750237  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:29.249909  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.249982  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.250305  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:29.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.750450  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.250215  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.250646  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.750488  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.750567  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.750921  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:30.750983  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:31.249701  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.249780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:31.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.749792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.750145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.249871  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.249942  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.250315  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.750384  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.750771  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:33.250608  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.250702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.251142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:33.251214  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:33.749937  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.750014  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.750358  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.250212  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.250294  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.250661  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.750449  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.750521  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.750895  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.249645  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.249716  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.749839  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.749916  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.750258  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:35.750321  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:36.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.249815  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.250128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:36.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.749780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.750111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.249872  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.249947  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.250270  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.750107  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.750195  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.750537  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:37.750607  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:38.250386  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.250473  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.250844  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:38.749618  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.749699  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.750037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.249712  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.249799  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.250114  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.749869  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.749958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.750319  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:40.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.249789  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.250125  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:40.250192  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:40.749723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.249867  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.249964  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.250300  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.750030  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.750104  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.750449  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:42.250237  146734 type.go:165] "Request Body" body=""
	I1222 22:55:42.250316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:42.250702  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:42.250790  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:42.749698  146734 type.go:165] "Request Body" body=""
	I1222 22:55:42.749778  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:42.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:43.249908  146734 type.go:165] "Request Body" body=""
	I1222 22:55:43.249984  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:43.250316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:43.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:55:43.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:43.750109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:44.249847  146734 type.go:165] "Request Body" body=""
	I1222 22:55:44.249920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:44.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:44.749977  146734 type.go:165] "Request Body" body=""
	I1222 22:55:44.750059  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:44.750429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:44.750493  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:45.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:55:45.250340  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:45.250671  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:45.750529  146734 type.go:165] "Request Body" body=""
	I1222 22:55:45.750635  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:45.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:46.249762  146734 type.go:165] "Request Body" body=""
	I1222 22:55:46.249839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:46.250179  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:46.749734  146734 type.go:165] "Request Body" body=""
	I1222 22:55:46.749810  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:46.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:47.249888  146734 type.go:165] "Request Body" body=""
	I1222 22:55:47.249964  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:47.250293  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:47.250367  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:47.750084  146734 type.go:165] "Request Body" body=""
	I1222 22:55:47.750158  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:47.750514  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:48.250329  146734 type.go:165] "Request Body" body=""
	I1222 22:55:48.250397  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:48.250735  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:48.750528  146734 type.go:165] "Request Body" body=""
	I1222 22:55:48.750629  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:48.750960  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:49.249711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:49.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:49.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:49.749756  146734 type.go:165] "Request Body" body=""
	I1222 22:55:49.749840  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:49.750202  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:49.750280  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:50.250567  146734 type.go:165] "Request Body" body=""
	I1222 22:55:50.250668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:50.251016  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:50.749759  146734 type.go:165] "Request Body" body=""
	I1222 22:55:50.749830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:50.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:51.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:55:51.249810  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:51.250134  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:51.749846  146734 type.go:165] "Request Body" body=""
	I1222 22:55:51.749921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:51.750215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:52.249929  146734 type.go:165] "Request Body" body=""
	I1222 22:55:52.250026  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:52.250348  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:52.250418  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:52.750253  146734 type.go:165] "Request Body" body=""
	I1222 22:55:52.750351  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:52.750714  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:53.250551  146734 type.go:165] "Request Body" body=""
	I1222 22:55:53.250675  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:53.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:53.749793  146734 type.go:165] "Request Body" body=""
	I1222 22:55:53.749867  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:53.750205  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:54.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:55:54.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:54.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:54.749935  146734 type.go:165] "Request Body" body=""
	I1222 22:55:54.750018  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:54.750343  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:54.750406  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:55.250174  146734 type.go:165] "Request Body" body=""
	I1222 22:55:55.250255  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:55.250582  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:55.750396  146734 type.go:165] "Request Body" body=""
	I1222 22:55:55.750466  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:55.750800  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:56.250589  146734 type.go:165] "Request Body" body=""
	I1222 22:55:56.250683  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:56.251003  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:56.749740  146734 type.go:165] "Request Body" body=""
	I1222 22:55:56.749822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:56.750149  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:57.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:55:57.249767  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:57.250019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:57.250065  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:57.749911  146734 type.go:165] "Request Body" body=""
	I1222 22:55:57.749984  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:57.750312  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:58.250079  146734 type.go:165] "Request Body" body=""
	I1222 22:55:58.250158  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:58.250482  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:58.749773  146734 type.go:165] "Request Body" body=""
	I1222 22:55:58.749843  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:58.750137  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:59.249988  146734 type.go:165] "Request Body" body=""
	I1222 22:55:59.250074  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:59.250366  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:59.250416  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:59.750170  146734 type.go:165] "Request Body" body=""
	I1222 22:55:59.750247  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:59.750648  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:00.250540  146734 type.go:165] "Request Body" body=""
	I1222 22:56:00.250654  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:00.250961  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:00.749739  146734 type.go:165] "Request Body" body=""
	I1222 22:56:00.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:00.750177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:01.249895  146734 type.go:165] "Request Body" body=""
	I1222 22:56:01.249986  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:01.250340  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:01.750038  146734 type.go:165] "Request Body" body=""
	I1222 22:56:01.750113  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:01.750490  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:01.750581  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:02.250443  146734 type.go:165] "Request Body" body=""
	I1222 22:56:02.250525  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:02.250895  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:02.749821  146734 type.go:165] "Request Body" body=""
	I1222 22:56:02.749917  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:02.750273  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:03.250163  146734 type.go:165] "Request Body" body=""
	I1222 22:56:03.250251  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:03.250624  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:03.750411  146734 type.go:165] "Request Body" body=""
	I1222 22:56:03.750512  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:03.750893  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:03.750957  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:04.249658  146734 type.go:165] "Request Body" body=""
	I1222 22:56:04.249737  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:04.250088  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:04.749706  146734 type.go:165] "Request Body" body=""
	I1222 22:56:04.749798  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:04.750106  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:05.249956  146734 type.go:165] "Request Body" body=""
	I1222 22:56:05.250031  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:05.250365  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:05.750235  146734 type.go:165] "Request Body" body=""
	I1222 22:56:05.750322  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:05.750688  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:06.250487  146734 type.go:165] "Request Body" body=""
	I1222 22:56:06.250559  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:06.250842  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:06.250893  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:06.749620  146734 type.go:165] "Request Body" body=""
	I1222 22:56:06.749705  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:06.750048  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:07.249785  146734 type.go:165] "Request Body" body=""
	I1222 22:56:07.249875  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:07.250216  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:07.750003  146734 type.go:165] "Request Body" body=""
	I1222 22:56:07.750078  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:07.750408  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:08.250236  146734 type.go:165] "Request Body" body=""
	I1222 22:56:08.250327  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:08.250694  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:08.750527  146734 type.go:165] "Request Body" body=""
	I1222 22:56:08.750631  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:08.750990  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:08.751068  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:09.249747  146734 type.go:165] "Request Body" body=""
	I1222 22:56:09.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:09.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:09.749886  146734 type.go:165] "Request Body" body=""
	I1222 22:56:09.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:09.750308  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:10.249647  146734 type.go:165] "Request Body" body=""
	I1222 22:56:10.249737  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:10.250057  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:10.749709  146734 type.go:165] "Request Body" body=""
	I1222 22:56:10.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:10.750127  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:11.249845  146734 type.go:165] "Request Body" body=""
	I1222 22:56:11.249934  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:11.250260  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:11.250328  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:11.749676  146734 type.go:165] "Request Body" body=""
	I1222 22:56:11.749752  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:11.750075  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:12.249809  146734 type.go:165] "Request Body" body=""
	I1222 22:56:12.249905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:12.250221  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:12.750192  146734 type.go:165] "Request Body" body=""
	I1222 22:56:12.750274  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:12.750624  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:13.250511  146734 type.go:165] "Request Body" body=""
	I1222 22:56:13.250619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:13.250949  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:13.251021  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:13.749706  146734 type.go:165] "Request Body" body=""
	I1222 22:56:13.749791  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:13.750112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:14.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:56:14.249926  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:14.250269  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:14.749994  146734 type.go:165] "Request Body" body=""
	I1222 22:56:14.750085  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:14.750445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:15.250241  146734 type.go:165] "Request Body" body=""
	I1222 22:56:15.250328  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:15.250679  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:15.750491  146734 type.go:165] "Request Body" body=""
	I1222 22:56:15.750582  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:15.750940  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:15.751003  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:16.249702  146734 type.go:165] "Request Body" body=""
	I1222 22:56:16.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:16.250131  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:16.749842  146734 type.go:165] "Request Body" body=""
	I1222 22:56:16.749926  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:16.750230  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:17.249944  146734 type.go:165] "Request Body" body=""
	I1222 22:56:17.250027  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:17.250345  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:17.750037  146734 type.go:165] "Request Body" body=""
	I1222 22:56:17.750126  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:17.750464  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:18.249806  146734 type.go:165] "Request Body" body=""
	I1222 22:56:18.249894  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:18.250221  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:18.250280  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:18.749968  146734 type.go:165] "Request Body" body=""
	I1222 22:56:18.750042  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:18.750375  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:19.250188  146734 type.go:165] "Request Body" body=""
	I1222 22:56:19.250274  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:19.250616  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:19.750444  146734 type.go:165] "Request Body" body=""
	I1222 22:56:19.750535  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:19.750879  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:20.249610  146734 type.go:165] "Request Body" body=""
	I1222 22:56:20.249709  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:20.250048  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:20.749785  146734 type.go:165] "Request Body" body=""
	I1222 22:56:20.749875  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:20.750205  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:20.750279  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the GET to https://192.168.49.2:8441/api/v1/nodes/functional-384766 repeats unchanged on a ~500ms cadence from 22:56:21 through 22:57:20. Every round trip fails before any HTTP exchange (empty status, empty headers, 0ms), and node_ready.go:55 logs the same "connection refused" (will retry) warning every 2 to 2.5 seconds, the last at 22:57:18; the raw log then breaks off mid-entry at 22:57:20.]
	 >
	I1222 22:57:20.250157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.750100  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.750194  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.750870  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:20.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:21.249638  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.249718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:21.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.749701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.750025  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.249683  146734 type.go:165] "Request Body" body=""
	I1222 22:57:22.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:22.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.750117  146734 node_ready.go:38] duration metric: took 6m0.000675026s for node "functional-384766" to be "Ready" ...
	I1222 22:57:22.752685  146734 out.go:203] 
	W1222 22:57:22.753745  146734 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 22:57:22.753760  146734 out.go:285] * 
	W1222 22:57:22.753991  146734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 22:57:22.755053  146734 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-384766 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.569463785s for "functional-384766" cluster.
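The six-minute stretch of identical GET requests in the stderr above is minikube's node-readiness wait: it re-fetches /api/v1/nodes/functional-384766 roughly every 500ms until the node reports Ready or the 6m0s budget expires, and on this run every attempt failed with "connection refused" because kube-apiserver never came back up. Below is a minimal client-go sketch of that polling pattern, for illustration only; it is not minikube's actual implementation, and the kubeconfig path and hard-coded node name are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption for illustration: the default kubeconfig selects the
		// functional-384766 cluster (https://192.168.49.2:8441).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, give up after 6 minutes -- the same cadence and
		// budget visible in the log. Transient errors (e.g. connection refused
		// while the apiserver is down) are swallowed so the poll retries.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-384766", metav1.GetOptions{})
				if err != nil {
					return false, nil // retry until the deadline
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err) // on a run like this one: context deadline exceeded
	}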
I1222 22:57:23.082535   75803 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
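The inspect output shows the container itself is Running and the apiserver port 8441/tcp is published to 127.0.0.1:32786, so the failure sits inside the guest rather than in Docker's port wiring. A small sketch of pulling that mapping programmatically, using the same Go-template shape the driver uses elsewhere in this log for the 22/tcp SSH port (illustrative only, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask the docker CLI for the host port bound to the apiserver port of
		// the functional-384766 container (expected on this run: 32786).
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-384766").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}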
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (293.302605ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
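Exit status 2 with a Host state of "Running" means the container is up but at least one component (kubelet or apiserver) is not, which matches the repeated "connection refused" on 192.168.49.2:8441 in the wait loop. That symptom can be reproduced with a plain TCP probe of the apiserver endpoint; a sketch using the address from this run:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// While kube-apiserver is down this fails immediately with
		// "connect: connection refused", matching the log above; once the
		// apiserver is back, the dial succeeds.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}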
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                           ARGS                                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                                         │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ ssh            │ functional-580825 ssh pgrep buildkitd                                                                                                                                                    │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image          │ functional-580825 image ls --format json --alsologtostderr                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls --format short --alsologtostderr                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image          │ functional-580825 image ls --format yaml --alsologtostderr                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls --format table --alsologtostderr                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr                                                                                   │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ delete         │ -p functional-580825                                                                                                                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ start          │ -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start          │ -p functional-384766 --alsologtostderr -v=8                                                                                                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:51 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:51:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:51:17.565426  146734 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:51:17.565716  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565727  146734 out.go:374] Setting ErrFile to fd 2...
	I1222 22:51:17.565732  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565972  146734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:51:17.566463  146734 out.go:368] Setting JSON to false
	I1222 22:51:17.567434  146734 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9218,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:51:17.567486  146734 start.go:143] virtualization: kvm guest
	I1222 22:51:17.569465  146734 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:51:17.570460  146734 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:51:17.570465  146734 notify.go:221] Checking for updates...
	I1222 22:51:17.572456  146734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:51:17.573608  146734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:17.574791  146734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:51:17.575840  146734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:51:17.576824  146734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:51:17.578279  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:17.578404  146734 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:51:17.602058  146734 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:51:17.602223  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.652786  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.644025132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.652901  146734 docker.go:319] overlay module found
	I1222 22:51:17.655127  146734 out.go:179] * Using the docker driver based on existing profile
	I1222 22:51:17.656150  146734 start.go:309] selected driver: docker
	I1222 22:51:17.656165  146734 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.656249  146734 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:51:17.656337  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.716062  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.707507925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.716919  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:17.717012  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:17.717085  146734 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.719515  146734 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:51:17.720631  146734 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:51:17.721792  146734 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:51:17.723064  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:17.723095  146734 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:51:17.723112  146734 cache.go:65] Caching tarball of preloaded images
	I1222 22:51:17.723172  146734 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:51:17.723191  146734 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:51:17.723198  146734 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:51:17.723299  146734 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:51:17.742349  146734 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:51:17.742368  146734 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:51:17.742396  146734 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:51:17.742444  146734 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:51:17.742506  146734 start.go:364] duration metric: took 41.881µs to acquireMachinesLock for "functional-384766"
	I1222 22:51:17.742535  146734 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:51:17.742545  146734 fix.go:54] fixHost starting: 
	I1222 22:51:17.742810  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:17.759507  146734 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:51:17.759531  146734 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:51:17.761090  146734 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:51:17.761123  146734 machine.go:94] provisionDockerMachine start ...
	I1222 22:51:17.761180  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.778682  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.778900  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.778912  146734 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:51:17.919326  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:17.919369  146734 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:51:17.919431  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.936992  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.937221  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.937234  146734 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:51:18.086470  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:18.086564  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.104748  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.105051  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.105077  146734 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:51:18.246730  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:51:18.246760  146734 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:51:18.246782  146734 ubuntu.go:190] setting up certificates
	I1222 22:51:18.246792  146734 provision.go:84] configureAuth start
	I1222 22:51:18.246854  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:18.265782  146734 provision.go:143] copyHostCerts
	I1222 22:51:18.265828  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.265879  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:51:18.265900  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.266005  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:51:18.266139  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266163  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:51:18.266175  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266220  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:51:18.266317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266344  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:51:18.266355  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266400  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:51:18.266499  146734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:51:18.330118  146734 provision.go:177] copyRemoteCerts
	I1222 22:51:18.330177  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:51:18.330210  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.347420  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:18.447556  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 22:51:18.447646  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:51:18.464129  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 22:51:18.464180  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:51:18.480702  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 22:51:18.480757  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 22:51:18.496998  146734 provision.go:87] duration metric: took 250.195084ms to configureAuth
	I1222 22:51:18.497021  146734 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:51:18.497168  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:18.497218  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.514380  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.514623  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.514636  146734 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:51:18.655354  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:51:18.655383  146734 ubuntu.go:71] root file system type: overlay
	I1222 22:51:18.655533  146734 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:51:18.655634  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.673540  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.673819  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.673915  146734 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:51:18.823487  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:51:18.823601  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.841347  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.841608  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.841639  146734 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:51:18.987007  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:51:18.987042  146734 machine.go:97] duration metric: took 1.225905804s to provisionDockerMachine
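	The guarded one-liner above is the idempotent unit update: docker.service is only replaced, and the daemon only restarted, when the freshly rendered unit differs from what is installed. A minimal Go sketch of the same guard (updateUnit is a hypothetical name; assumes the new unit was already written next to the old one with a .new suffix, as in the tee step above):
	
	package main
	
	import (
		"bytes"
		"os"
		"os/exec"
	)
	
	// updateUnit swaps path+".new" into place and restarts the service
	// only when the contents actually differ, mirroring the
	// `diff -u ... || { mv ...; systemctl ... }` guard above.
	func updateUnit(path, service string) error {
		oldUnit, _ := os.ReadFile(path) // a missing unit reads as empty
		newUnit, err := os.ReadFile(path + ".new")
		if err != nil {
			return err
		}
		if bytes.Equal(oldUnit, newUnit) {
			return nil // unchanged: no restart, no downtime
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", service}, {"restart", service},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}
	
	func main() {
		if err := updateUnit("/lib/systemd/system/docker.service", "docker"); err != nil {
			panic(err)
		}
	}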
	I1222 22:51:18.987059  146734 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:51:18.987075  146734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:51:18.987145  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:51:18.987199  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.006696  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.107530  146734 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:51:19.110931  146734 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 22:51:19.110952  146734 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 22:51:19.110959  146734 command_runner.go:130] > VERSION_ID="12"
	I1222 22:51:19.110964  146734 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 22:51:19.110979  146734 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 22:51:19.110985  146734 command_runner.go:130] > ID=debian
	I1222 22:51:19.110992  146734 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 22:51:19.111000  146734 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 22:51:19.111012  146734 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 22:51:19.111100  146734 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:51:19.111124  146734 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:51:19.111137  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:51:19.111205  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:51:19.111317  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:51:19.111330  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /etc/ssl/certs/758032.pem
	I1222 22:51:19.111426  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:51:19.111434  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> /etc/test/nested/copy/75803/hosts
	I1222 22:51:19.111495  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:51:19.119122  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:19.135900  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:51:19.152438  146734 start.go:296] duration metric: took 165.360222ms for postStartSetup
	I1222 22:51:19.152512  146734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:51:19.152568  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.170181  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.267525  146734 command_runner.go:130] > 37%
	I1222 22:51:19.267628  146734 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:51:19.272133  146734 command_runner.go:130] > 185G
	I1222 22:51:19.272164  146734 fix.go:56] duration metric: took 1.529618595s for fixHost
	I1222 22:51:19.272178  146734 start.go:83] releasing machines lock for "functional-384766", held for 1.529658247s
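	The two df probes just above (percent used and gigabytes free on /var) can also be answered without shelling out. A sketch using syscall.Statfs, Linux-only, which only approximates df's Use% rounding:
	
	package main
	
	import (
		"fmt"
		"syscall"
	)
	
	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		// df's Use% reserves blocks for root; this simpler ratio is close.
		total := st.Blocks * uint64(st.Bsize)
		free := st.Bavail * uint64(st.Bsize)
		usedPct := 100 * (total - free) / total
		fmt.Printf("%d%% used, %dG free\n", usedPct, free/(1<<30))
	}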
	I1222 22:51:19.272243  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:19.290506  146734 ssh_runner.go:195] Run: cat /version.json
	I1222 22:51:19.290562  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.290583  146734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:51:19.290685  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.307884  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.308688  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.461522  146734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 22:51:19.463216  146734 command_runner.go:130] > {"iso_version": "v1.37.0-1766254259-22261", "kicbase_version": "v0.0.48-1766394456-22288", "minikube_version": "v1.37.0", "commit": "069cfc84263169a672fdad8d37486b5cb35673ac"}
	I1222 22:51:19.463366  146734 ssh_runner.go:195] Run: systemctl --version
	I1222 22:51:19.469697  146734 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 22:51:19.469761  146734 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 22:51:19.469847  146734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 22:51:19.474292  146734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 22:51:19.474367  146734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:51:19.474416  146734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:51:19.482031  146734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:51:19.482056  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.482091  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.482215  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.495227  146734 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 22:51:19.495298  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:51:19.503438  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:51:19.511525  146734 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:51:19.511574  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:51:19.519676  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.527517  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:51:19.535615  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.543569  146734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:51:19.550965  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:51:19.559037  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:51:19.567079  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
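	The sed invocations above rewrite /etc/containerd/config.toml in place; the SystemdCgroup toggle is the one that must agree with the cgroup driver the kubelet is configured with later in this log. A regexp-based Go sketch of that single edit (setSystemdCgroup is a hypothetical helper):
	
	package main
	
	import (
		"os"
		"regexp"
	)
	
	var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	
	// setSystemdCgroup rewrites every SystemdCgroup line in config.toml,
	// like the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|g'` call above.
	func setSystemdCgroup(path string, enabled bool) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		val := "false"
		if enabled {
			val = "true"
		}
		out := systemdCgroupRe.ReplaceAll(data, []byte("${1}SystemdCgroup = "+val))
		return os.WriteFile(path, out, 0o644)
	}
	
	func main() {
		// "cgroupfs" was detected on the host, so systemd cgroups stay off.
		if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
			panic(err)
		}
	}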
	I1222 22:51:19.575222  146734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:51:19.582102  146734 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 22:51:19.582154  146734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:51:19.588882  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:19.668907  146734 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 22:51:19.740882  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.740926  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.740967  146734 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:51:19.753727  146734 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1222 22:51:19.753762  146734 command_runner.go:130] > [Unit]
	I1222 22:51:19.753770  146734 command_runner.go:130] > Description=Docker Application Container Engine
	I1222 22:51:19.753778  146734 command_runner.go:130] > Documentation=https://docs.docker.com
	I1222 22:51:19.753787  146734 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1222 22:51:19.753797  146734 command_runner.go:130] > Wants=network-online.target containerd.service
	I1222 22:51:19.753808  146734 command_runner.go:130] > Requires=docker.socket
	I1222 22:51:19.753815  146734 command_runner.go:130] > StartLimitBurst=3
	I1222 22:51:19.753825  146734 command_runner.go:130] > StartLimitIntervalSec=60
	I1222 22:51:19.753833  146734 command_runner.go:130] > [Service]
	I1222 22:51:19.753841  146734 command_runner.go:130] > Type=notify
	I1222 22:51:19.753848  146734 command_runner.go:130] > Restart=always
	I1222 22:51:19.753862  146734 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1222 22:51:19.753882  146734 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1222 22:51:19.753896  146734 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1222 22:51:19.753910  146734 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1222 22:51:19.753923  146734 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1222 22:51:19.753937  146734 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1222 22:51:19.753952  146734 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1222 22:51:19.753969  146734 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1222 22:51:19.753983  146734 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1222 22:51:19.753991  146734 command_runner.go:130] > ExecStart=
	I1222 22:51:19.754018  146734 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1222 22:51:19.754031  146734 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1222 22:51:19.754046  146734 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1222 22:51:19.754060  146734 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1222 22:51:19.754067  146734 command_runner.go:130] > LimitNOFILE=infinity
	I1222 22:51:19.754076  146734 command_runner.go:130] > LimitNPROC=infinity
	I1222 22:51:19.754084  146734 command_runner.go:130] > LimitCORE=infinity
	I1222 22:51:19.754095  146734 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1222 22:51:19.754107  146734 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1222 22:51:19.754115  146734 command_runner.go:130] > TasksMax=infinity
	I1222 22:51:19.754124  146734 command_runner.go:130] > TimeoutStartSec=0
	I1222 22:51:19.754137  146734 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1222 22:51:19.754152  146734 command_runner.go:130] > Delegate=yes
	I1222 22:51:19.754162  146734 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1222 22:51:19.754171  146734 command_runner.go:130] > KillMode=process
	I1222 22:51:19.754179  146734 command_runner.go:130] > OOMScoreAdjust=-500
	I1222 22:51:19.754187  146734 command_runner.go:130] > [Install]
	I1222 22:51:19.754196  146734 command_runner.go:130] > WantedBy=multi-user.target
	I1222 22:51:19.754834  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.766639  146734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:51:19.781290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.792290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:51:19.803490  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.815697  146734 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1222 22:51:19.816642  146734 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:51:19.820210  146734 command_runner.go:130] > /usr/bin/cri-dockerd
	I1222 22:51:19.820315  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:51:19.827693  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:51:19.839649  146734 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:51:19.921176  146734 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:51:20.004043  146734 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:51:20.004160  146734 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:51:20.017007  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:51:20.028524  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:20.107815  146734 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:51:20.801234  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:51:20.813428  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:51:20.824782  146734 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:51:20.839450  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:20.850829  146734 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:51:20.931099  146734 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:51:21.012149  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.092742  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:51:21.120647  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:51:21.132196  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.256485  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:51:21.327564  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:21.340042  146734 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:51:21.340117  146734 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:51:21.343842  146734 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1222 22:51:21.343869  146734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 22:51:21.343877  146734 command_runner.go:130] > Device: 0,75	Inode: 1744        Links: 1
	I1222 22:51:21.343888  146734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1222 22:51:21.343895  146734 command_runner.go:130] > Access: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343909  146734 command_runner.go:130] > Modify: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343924  146734 command_runner.go:130] > Change: 2025-12-22 22:51:21.279839753 +0000
	I1222 22:51:21.343935  146734 command_runner.go:130] >  Birth: 2025-12-22 22:51:21.266838495 +0000
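	"Will wait 60s for socket path" is a stat poll until /var/run/cri-dockerd.sock exists as a unix socket. A sketch of such a poll (waitForSocket is a hypothetical name; the half-second interval is an assumption):
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until path exists and is a unix socket, or the
	// deadline passes, mirroring the 60s wait on /var/run/cri-dockerd.sock.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
	}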
	I1222 22:51:21.343976  146734 start.go:564] Will wait 60s for crictl version
	I1222 22:51:21.344020  146734 ssh_runner.go:195] Run: which crictl
	I1222 22:51:21.347282  146734 command_runner.go:130] > /usr/local/bin/crictl
	I1222 22:51:21.347341  146734 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:51:21.370719  146734 command_runner.go:130] > Version:  0.1.0
	I1222 22:51:21.370739  146734 command_runner.go:130] > RuntimeName:  docker
	I1222 22:51:21.370743  146734 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1222 22:51:21.370748  146734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 22:51:21.370764  146734 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:51:21.370812  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.395767  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.395836  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.418820  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.422122  146734 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:51:21.422206  146734 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:51:21.439338  146734 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:51:21.443526  146734 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 22:51:21.443628  146734 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:51:21.443753  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:21.443822  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.464281  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.464308  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.464318  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.464325  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.464332  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.464340  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.464348  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.464366  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.464395  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.464407  146734 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:51:21.464455  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.482666  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.482684  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.482690  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.482697  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.482704  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.482712  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.482729  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.482739  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.483998  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.484022  146734 cache_images.go:86] Images are preloaded, skipping loading
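	"Images are preloaded, skipping loading" falls out of comparing the `docker images` listing against the image set the requested Kubernetes version needs. A set-membership sketch (the required list below is abbreviated to three of the eight images named above):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
			"registry.k8s.io/etcd:3.6.6-0",
			"registry.k8s.io/pause:3.10.1",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing, need to extract preload:", img)
				return
			}
		}
		fmt.Println("images are preloaded, skipping loading")
	}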
	I1222 22:51:21.484036  146734 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:51:21.484172  146734 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 22:51:21.484238  146734 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:51:21.532066  146734 command_runner.go:130] > cgroupfs
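	`docker info --format {{.CgroupDriver}}` is the authoritative probe for which driver the runtime actually uses; the cgroupDriver: cgroupfs line in the KubeletConfiguration rendered just below must echo it, or the kubelet will refuse to manage pods. A sketch of the probe:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
		fmt.Println("kubelet must be configured with cgroupDriver:", driver)
	}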
	I1222 22:51:21.533783  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:21.533808  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:21.533825  146734 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:51:21.533845  146734 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:51:21.533961  146734 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
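	The kubeadm/kubelet/kube-proxy config above is rendered from a Go template with the cluster's values substituted in before being scp'd to /var/tmp/minikube/kubeadm.yaml.new. A toy sketch of that rendering step (the template is drastically abbreviated and the field names are hypothetical):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
	kubernetesVersion: {{.Version}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		err := t.Execute(os.Stdout, map[string]string{
			"Endpoint":      "control-plane.minikube.internal",
			"Port":          "8441",
			"Version":       "v1.35.0-rc.1",
			"PodSubnet":     "10.244.0.0/16",
			"ServiceSubnet": "10.96.0.0/12",
		})
		if err != nil {
			panic(err)
		}
	}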
	I1222 22:51:21.534020  146734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:51:21.542124  146734 command_runner.go:130] > kubeadm
	I1222 22:51:21.542141  146734 command_runner.go:130] > kubectl
	I1222 22:51:21.542144  146734 command_runner.go:130] > kubelet
	I1222 22:51:21.542165  146734 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:51:21.542214  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:51:21.549624  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:51:21.561393  146734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:51:21.572932  146734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1222 22:51:21.584412  146734 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:51:21.587798  146734 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 22:51:21.587903  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.667778  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:21.997732  146734 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:51:21.997755  146734 certs.go:195] generating shared ca certs ...
	I1222 22:51:21.997774  146734 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:21.997942  146734 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:51:21.998024  146734 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:51:21.998042  146734 certs.go:257] generating profile certs ...
	I1222 22:51:21.998184  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:51:21.998247  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:51:21.998298  146734 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:51:21.998317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 22:51:21.998340  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 22:51:21.998365  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 22:51:21.998382  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 22:51:21.998399  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 22:51:21.998418  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 22:51:21.998436  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 22:51:21.998454  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 22:51:21.998527  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:51:21.998578  146734 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:51:21.998635  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:51:21.998684  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:51:21.998717  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:51:21.998750  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:51:21.998813  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:21.998854  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /usr/share/ca-certificates/758032.pem
	I1222 22:51:21.998877  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:21.998896  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem -> /usr/share/ca-certificates/75803.pem
	I1222 22:51:21.999493  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:51:22.018141  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:51:22.036416  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:51:22.053080  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:51:22.069323  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:51:22.085369  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:51:22.101485  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:51:22.117634  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:51:22.133612  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:51:22.150125  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:51:22.166578  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:51:22.182911  146734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:51:22.194486  146734 ssh_runner.go:195] Run: openssl version
	I1222 22:51:22.199935  146734 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 22:51:22.200169  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.206913  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:51:22.213732  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217037  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217075  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217111  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.249675  146734 command_runner.go:130] > b5213941
	I1222 22:51:22.250033  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:51:22.257095  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.264071  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:51:22.271042  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274411  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274445  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274483  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.307772  146734 command_runner.go:130] > 51391683
	I1222 22:51:22.308113  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 22:51:22.315176  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.322196  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:51:22.329109  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332667  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332691  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332732  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.365940  146734 command_runner.go:130] > 3ec20f2e
	I1222 22:51:22.366181  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
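	The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its OpenSSL subject-hash name (<hash>.0) so certificate-directory lookups resolve it. The hash algorithm is OpenSSL-specific, so the simplest faithful sketch shells out for it (installCA is a hypothetical helper):
	
	package main
	
	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCA symlinks certPath into /etc/ssl/certs under the
	// OpenSSL subject-hash name, as the commands above do.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // emulate ln -f: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}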
	I1222 22:51:22.373802  146734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377513  146734 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377537  146734 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 22:51:22.377543  146734 command_runner.go:130] > Device: 8,1	Inode: 809094      Links: 1
	I1222 22:51:22.377550  146734 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 22:51:22.377558  146734 command_runner.go:130] > Access: 2025-12-22 22:47:15.370061162 +0000
	I1222 22:51:22.377566  146734 command_runner.go:130] > Modify: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377574  146734 command_runner.go:130] > Change: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377602  146734 command_runner.go:130] >  Birth: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377678  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:51:22.411266  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.411570  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:51:22.445025  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.445322  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:51:22.479095  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.479395  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:51:22.512263  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.512537  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:51:22.545264  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.545554  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 22:51:22.578867  146734 command_runner.go:130] > Certificate will not expire
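	Each `-checkend 86400` invocation above asks whether a certificate expires within the next 24 hours. The same check in pure Go with crypto/x509, no openssl required (expiresWithin is a hypothetical name):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires
	// inside d, the equivalent of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		if soon {
			fmt.Println("certificate needs regeneration")
		} else {
			fmt.Println("certificate will not expire")
		}
	}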
	I1222 22:51:22.579164  146734 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:22.579364  146734 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:51:22.598061  146734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:51:22.605833  146734 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 22:51:22.605851  146734 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 22:51:22.605860  146734 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 22:51:22.605880  146734 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:51:22.605891  146734 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:51:22.605932  146734 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:51:22.613011  146734 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:51:22.613379  146734 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384766" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.613493  146734 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384766" cluster setting kubeconfig missing "functional-384766" context setting]
	I1222 22:51:22.613840  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
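	The repair path re-adds the missing cluster and context entries and writes the kubeconfig back under the file lock acquired above. A sketch with client-go's clientcmd package (assumes a recent k8s.io/client-go is available; paths and names taken from the log):
	
	package main
	
	import (
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/clientcmd/api"
	)
	
	func main() {
		path := "/home/jenkins/minikube-integration/22301-72233/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		const name = "functional-384766"
		// Restore the missing cluster and context settings the log reports.
		cfg.Clusters[name] = &api.Cluster{
			Server:               "https://192.168.49.2:8441",
			CertificateAuthority: "/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt",
		}
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}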
	I1222 22:51:22.614238  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.614401  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.614887  146734 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 22:51:22.614906  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 22:51:22.614915  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1222 22:51:22.614921  146734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1222 22:51:22.614926  146734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1222 22:51:22.614933  146734 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 22:51:22.614941  146734 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 22:51:22.615340  146734 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:51:22.622321  146734 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 22:51:22.622350  146734 kubeadm.go:602] duration metric: took 16.45181ms to restartPrimaryControlPlane
	I1222 22:51:22.622360  146734 kubeadm.go:403] duration metric: took 43.204719ms to StartCluster
	I1222 22:51:22.622376  146734 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.622430  146734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.622875  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.623066  146734 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 22:51:22.623138  146734 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 22:51:22.623233  146734 addons.go:70] Setting storage-provisioner=true in profile "functional-384766"
	I1222 22:51:22.623261  146734 addons.go:239] Setting addon storage-provisioner=true in "functional-384766"
	I1222 22:51:22.623284  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:22.623296  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.623288  146734 addons.go:70] Setting default-storageclass=true in profile "functional-384766"
	I1222 22:51:22.623322  146734 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384766"
	I1222 22:51:22.623660  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.623809  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.624438  146734 out.go:179] * Verifying Kubernetes components...
	I1222 22:51:22.625531  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:22.644170  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.644380  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
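
The client config kapi.go dumps here is the standard client-go rest.Config built from the profile's kubeconfig. A minimal sketch of the equivalent setup (the kubeconfig path is illustrative, not the jenkins one above):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// With an empty master URL, BuildConfigFromFlags reads the
    	// server, cert, and key paths from the kubeconfig file.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("API server:", cfg.Host, "client ready:", clientset != nil)
    }
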
	I1222 22:51:22.644456  146734 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:22.644766  146734 addons.go:239] Setting addon default-storageclass=true in "functional-384766"
	I1222 22:51:22.644810  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.645336  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.645513  146734 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.645531  146734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 22:51:22.645584  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.667387  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.668028  146734 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.668061  146734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 22:51:22.668129  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.686127  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
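
The sshutil/ssh_runner steps above copy the addon manifests into the node container and run commands there over SSH on the forwarded port 32783. A stripped-down sketch with golang.org/x/crypto/ssh (the key path is a placeholder, and host-key checking is disabled only because this is a throwaway test node):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("sudo systemctl start kubelet")
    	fmt.Println(string(out), err)
    }
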
	I1222 22:51:22.735817  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:22.749391  146734 node_ready.go:35] waiting up to 6m0s for node "functional-384766" to be "Ready" ...
	I1222 22:51:22.749553  146734 type.go:165] "Request Body" body=""
	I1222 22:51:22.749681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:22.749924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:22.791529  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.791702  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.858228  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.858293  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858334  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858349  146734 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.860247  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
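
Both applies fail because nothing is listening on port 8441 yet, and retry.go schedules another attempt after 300ms. The pattern is plain retry-with-delay; a stdlib-only sketch of that shape (not minikube's actual retry.go, which computes its own backoff schedule):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retry runs f up to `attempts` times, sleeping `delay` between
    // failures and doubling it each round.
    func retry(attempts int, delay time.Duration, f func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = f(); err == nil {
    			return nil
    		}
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		delay *= 2 // simple exponential backoff
    	}
    	return err
    }

    func main() {
    	_ = retry(5, 300*time.Millisecond, func() error {
    		return errors.New("connect: connection refused")
    	})
    }
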
	I1222 22:51:23.114793  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.124266  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.170075  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.170134  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.179073  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.179145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.250418  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.250774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.384101  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.434813  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.434866  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.600155  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.651352  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.651412  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.749655  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.749735  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.901355  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.952200  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.952267  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.239666  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:24.250121  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.250189  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.250430  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:24.294448  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.294492  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.750059  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.750149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:24.750582  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
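
node_ready.go keeps reissuing the GET /api/v1/nodes/functional-384766 request above until the node reports a Ready condition or the 6m budget runs out. A hedged client-go equivalent (profile name and timeout come from this log; the kubeconfig path and exact loop shape are assumptions):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its Ready
    // condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		ready, err := nodeReady(cs, "functional-384766")
    		if err != nil {
    			fmt.Println("will retry:", err) // e.g. connection refused while the apiserver is down
    		} else if ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for node")
    }
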
	I1222 22:51:24.937883  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:24.989534  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.989576  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.250004  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.250083  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.250431  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:25.372773  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:25.425171  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:25.425216  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.749629  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.749702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.750010  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.170572  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:26.222069  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.222131  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.250327  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.250414  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.250759  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.440137  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:26.491948  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.492006  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.750538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.750885  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:26.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:27.250541  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.250646  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:27.355175  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:27.403566  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:27.406149  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:27.749989  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.750066  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.750396  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.250002  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.250075  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.250397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.438810  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:28.487114  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:28.489616  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:28.750061  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.750134  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.750419  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:29.250032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.250106  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.250445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:29.250522  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:29.750041  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.750138  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.750509  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.249736  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.249807  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.250111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.636760  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:30.689934  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:30.689988  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:30.750216  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.750316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.750667  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:31.250328  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.250434  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.250799  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:31.250876  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:31.750450  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.750530  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.750869  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.250580  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.711876  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:32.750368  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.750445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.750774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.760899  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:32.763771  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.250469  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.250543  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:33.250917  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:33.406152  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:33.457687  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:33.457745  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.750192  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.750291  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.750643  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.250274  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.250352  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.250676  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.749812  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.749877  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.249850  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.516575  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:35.570400  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:35.570450  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:35.749757  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.749831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.750172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:35.750238  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:36.249789  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.249888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:36.749817  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.749889  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.750217  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.249921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.250262  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.750101  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.750202  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.750527  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:37.750609  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:38.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.250333  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.250692  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:38.358924  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:38.409955  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:38.410034  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:38.750557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.750654  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.750998  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.249528  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.249647  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.249920  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.749563  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.749697  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.750029  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:40.249635  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.249710  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.250037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:40.250107  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:40.749663  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.749734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.750058  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.249687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.750194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:42.249756  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.249861  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:42.250268  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:42.750151  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.750674  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.250325  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.250412  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.250779  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.750422  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.750837  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:44.250504  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.250574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.250924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:44.250995  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:44.670446  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:44.719419  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:44.722302  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:44.722343  146734 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
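
By this point every kubectl apply and node GET has failed with connection refused on both localhost:8441 and 192.168.49.2:8441, which points at the apiserver itself rather than at the manifests. A quick probe one could run against the same endpoint to confirm that (sketch only; InsecureSkipVerify is used because the test cluster's CA is local):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The test cluster uses a locally generated CA, so skip
    			// verification for this ad-hoc probe only.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.49.2:8441/healthz")
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz status:", resp.Status)
    }
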
	I1222 22:51:44.750527  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.750632  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.750954  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.250633  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.250718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.251044  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.749665  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.749749  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.249725  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.749687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.749756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.750050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:46.750108  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:47.249647  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.249753  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.250081  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:47.750272  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.750344  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.250351  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.250445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.250816  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.750455  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.750540  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.750902  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:48.750964  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:49.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.250653  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.250985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:49.750603  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.750681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.249551  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.249641  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.249968  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.750576  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.750686  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.751008  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:50.751079  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:51.249557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.249656  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.249983  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:51.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.750094  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.572783  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:52.624461  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:52.624509  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.624539  146734 retry.go:84] will retry after 8.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
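
Each failed apply above is recorded three times (the command_runner output, the addons.go warning, and the retry.go scheduling line) before a delay is chosen. The uneven gaps across this log (8.9s, 19.1s, 11.1s, 41s, ...) suggest a randomized backoff; the sketch below is a hedged reconstruction of that shape, and retryAfter, attempts, and base are illustrative knobs, not minikube's actual retry.go signature.

package addons

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter retries f with a jittered delay between attempts, mimicking
// the "will retry after 8.9s" records above.
func retryAfter(f func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Random delay in [base, 3*base): spreads retries so several
		// addon appliers do not hammer a recovering apiserver in lockstep.
		d := base + time.Duration(rand.Int63n(int64(2*base)))
		fmt.Printf("will retry after %.1fs: %v\n", d.Seconds(), err)
		time.Sleep(d)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}
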
	I1222 22:51:52.749682  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.749751  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:53.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:53.250202  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:53.749743  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.750165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.249922  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.250320  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.749879  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.750325  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:55.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.249811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.250186  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:55.250256  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:55.749720  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.750101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.249777  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.630698  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:56.682682  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:56.682728  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:56.682750  146734 retry.go:84] will retry after 19.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
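
Note what is actually failing in these appliers: kubectl's client-side validation fetches the OpenAPI schema from the apiserver before applying, so a down apiserver turns a perfectly valid manifest into a "failed to download openapi" validation error. The sketch below mirrors the command from the log plus the --validate=false escape hatch the error message itself suggests; bear in mind that while :8441 refuses connections the apply would still fail at submission, so skipping validation only helps once the server is reachable again.

package addons

import "os/exec"

// applyUnvalidated mirrors the failing command above but adds
// --validate=false, skipping the OpenAPI schema download that is the
// proximate error here. KUBECONFIG is passed as a sudo argument exactly
// as in the log so it survives sudo's env_reset.
func applyUnvalidated(kubectl, manifest string) ([]byte, error) {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		kubectl, "apply", "--force", "--validate=false", "-f", manifest)
	return cmd.CombinedOutput()
}
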
	I1222 22:51:56.749962  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.750390  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.249859  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.250169  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.750032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.750112  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.750459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:57.750526  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:58.250052  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.250129  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.250484  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:58.750074  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.750164  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.750559  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.250376  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.250455  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.750547  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.750668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.751053  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:59.751124  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:00.249679  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.249756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.250124  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:00.749766  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.749870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.750200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.250214  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.555677  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:01.608817  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:01.608873  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.608898  146734 retry.go:84] will retry after 11.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.750139  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.750232  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.750541  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:02.250350  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.250446  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.250884  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:02.250959  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:02.749991  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.750087  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.750489  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.249785  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.250222  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.749863  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.749953  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.750330  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.249910  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.249991  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.749787  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.749878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.750255  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:04.750328  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:05.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.249881  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.250215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:05.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.749905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.750236  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.250166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.749813  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.750292  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:06.750353  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:07.249913  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.249997  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.250350  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:07.750157  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.750249  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.750625  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.250269  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.250349  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.250699  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.750338  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.750417  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.750817  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:08.750880  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
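
The Request/Response pairs that dominate this log come from client-go's verbose round-tripper tracing: status="" and milliseconds=0 mean the TCP dial was refused before any HTTP exchange took place. A generic net/http sketch of the same wrapping idea follows, assuming nothing about client-go's internal logger; loggingRT is an illustrative name.

package rtlog

import (
	"log"
	"net/http"
	"time"
)

// loggingRT wraps another RoundTripper and emits one line per request and
// per response, in the spirit of the round_trippers.go records above.
type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	log.Printf("Request verb=%q url=%q accept=%q",
		req.Method, req.URL, req.Header.Get("Accept"))
	resp, err := l.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// Matches the empty status seen above when the dial itself fails.
		log.Printf("Response status=%q milliseconds=%d err=%v", "", ms, err)
		return nil, err
	}
	log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
	return resp, nil
}

Installed via an http.Client{Transport: loggingRT{next: http.DefaultTransport}}, a wrapper like this records every poll without touching the calling code, which is why the trace appears uniformly across all of the requests above.
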
	I1222 22:52:09.250447  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.250886  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:09.750542  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.750651  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.751017  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.249667  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.250007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.749614  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.749698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.749986  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:11.249644  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.249721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.250050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:11.250115  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:11.749702  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.749781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.250570  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.250676  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.749204  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:12.749953  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.750037  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.750364  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.803295  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:12.803361  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:12.803388  146734 retry.go:84] will retry after 41s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:13.249864  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.249961  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.250341  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:13.250413  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:13.749947  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.750050  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.750385  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.249969  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.250047  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.250429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.250182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.749736  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.750176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:15.750272  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:15.781356  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:15.834579  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:15.834644  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:15.834678  146734 retry.go:84] will retry after 22s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:16.250185  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.250641  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:16.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.750391  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.750749  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.250375  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.250470  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.250796  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.750663  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.750770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.751155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:17.751219  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:18.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.249774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:18.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.750127  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.249772  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.249846  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.749726  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.750128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:20.249691  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.249767  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.250123  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:20.250193  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:20.749739  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.749820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.750153  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.249730  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.749804  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.750232  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:22.249806  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.250237  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:22.250298  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:22.750142  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.750516  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.250222  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.250332  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.250708  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.750972  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.249568  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.249660  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.249969  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.749566  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.749670  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.750007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:24.750078  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:25.249560  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.249668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.250009  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:25.749615  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.749711  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.249804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.250176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.749791  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.749896  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:26.750329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:27.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.249822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:27.749959  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.750049  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.750408  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.249981  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.250077  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.250414  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.749755  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.750148  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:29.249816  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.249914  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.250248  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:29.250329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:29.749820  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.750206  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.249734  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.250163  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.249714  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.249787  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.250100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.749725  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.750152  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:31.750223  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:32.249759  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.249872  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.250213  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:32.750284  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.750382  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.750808  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.250484  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.250553  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.750582  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.750682  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.751084  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:33.751151  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:34.249678  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:34.749750  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.750178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.249866  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.250195  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.749858  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.749938  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:36.249945  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.250030  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.250372  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:36.250452  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:36.750049  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.750122  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.750536  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.250238  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.250338  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.250710  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.750607  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.750693  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.751037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.791261  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:37.841791  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:37.841848  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:37.841882  146734 retry.go:84] will retry after 24.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
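The retry.go line reflects a simple run-then-wait retry: the apply exited with status 1, so the same command is rescheduled after a fixed delay (24.9s here). A minimal, self-contained sketch of that pattern (hypothetical helper, not minikube's retry.go; the kubectl invocation is simplified from the sudo/KUBECONFIG form in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryCommand runs a command up to `attempts` times, sleeping `delay`
// between failed attempts, and returns the last error if none succeed.
func retryCommand(attempts int, delay time.Duration, name string, args ...string) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i, err, out)
		if i < attempts {
			fmt.Printf("will retry after %s: %v\n", delay, lastErr)
			time.Sleep(delay)
		}
	}
	return lastErr
}

func main() {
	err := retryCommand(3, 24900*time.Millisecond, "kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	if err != nil {
		fmt.Println("apply never succeeded:", err)
	}
}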
	[... 22:52:38.250 - 22:52:53.750: polling of https://192.168.49.2:8441/api/v1/nodes/functional-384766 continued every ~500ms with identical request/response pairs; the node_ready.go:55 "connection refused" warning repeated roughly every two seconds ...]
	I1222 22:52:53.825965  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:53.875648  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878317  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878441  146734 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
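The stderr names the escape hatch itself: client-side validation needs the apiserver's /openapi/v2 document, so while that endpoint is unreachable the apply can only proceed with --validate=false, at the cost of skipping schema validation. A hypothetical sketch of that fallback from Go (not what minikube does here; it simply retries):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Apply the addon manifest with client-side validation disabled, as the
	// error text suggests. Only sensible for a trusted manifest, since schema
	// problems will then surface server-side, or not at all.
	out, err := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}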
	[... 22:52:54.249 - 22:53:02.750: the same ~500ms node polls and periodic node_ready.go:55 "connection refused" warnings continued ...]
	I1222 22:53:02.761644  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:53:02.811523  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814242  146734 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 22:53:02.815929  146734 out.go:179] * Enabled addons: 
	I1222 22:53:02.817068  146734 addons.go:530] duration metric: took 1m40.193946362s for enable addons: enabled=[]
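The "duration metric" line is plain elapsed-time bookkeeping: a timestamp taken when addon enabling begins, logged with the elapsed time when the step ends (1m40s here, with enabled=[] because every apply failed). A sketch of the pattern:

package main

import (
	"log"
	"time"
)

func main() {
	start := time.Now()
	enableAddons() // hypothetical stand-in for the addon-enable callbacks
	log.Printf("duration metric: took %s for enable addons", time.Since(start))
}

func enableAddons() { time.Sleep(100 * time.Millisecond) } // placeholder work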
	[... 22:53:03.249 - 22:53:32.250: the same ~500ms node polls and periodic node_ready.go:55 "connection refused" warnings continued, with no successful response from the apiserver ...]
	I1222 22:53:32.750188  146734 type.go:165] "Request Body" body=""
	I1222 22:53:32.750294  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:32.750695  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:33.250318  146734 type.go:165] "Request Body" body=""
	I1222 22:53:33.250417  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:33.250846  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:33.750507  146734 type.go:165] "Request Body" body=""
	I1222 22:53:33.750613  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:33.750990  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:34.249623  146734 type.go:165] "Request Body" body=""
	I1222 22:53:34.249700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:34.250041  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:34.749622  146734 type.go:165] "Request Body" body=""
	I1222 22:53:34.749696  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:34.749991  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:34.750050  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:35.249689  146734 type.go:165] "Request Body" body=""
	I1222 22:53:35.249763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:35.250117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:35.749695  146734 type.go:165] "Request Body" body=""
	I1222 22:53:35.749776  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:35.750097  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:36.249798  146734 type.go:165] "Request Body" body=""
	I1222 22:53:36.249903  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:36.250277  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:36.749846  146734 type.go:165] "Request Body" body=""
	I1222 22:53:36.749925  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:36.750288  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:36.750366  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:37.249900  146734 type.go:165] "Request Body" body=""
	I1222 22:53:37.249980  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:37.250334  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:37.750193  146734 type.go:165] "Request Body" body=""
	I1222 22:53:37.750266  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:37.750582  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:38.250318  146734 type.go:165] "Request Body" body=""
	I1222 22:53:38.250402  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:38.250780  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:38.750488  146734 type.go:165] "Request Body" body=""
	I1222 22:53:38.750565  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:38.750945  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:38.751009  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:39.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:53:39.250638  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:39.250958  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:39.749568  146734 type.go:165] "Request Body" body=""
	I1222 22:53:39.749673  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:39.750024  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:40.249671  146734 type.go:165] "Request Body" body=""
	I1222 22:53:40.249757  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:40.250102  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:40.749680  146734 type.go:165] "Request Body" body=""
	I1222 22:53:40.749755  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:40.750071  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:41.249740  146734 type.go:165] "Request Body" body=""
	I1222 22:53:41.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:41.250168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:41.250233  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:41.749722  146734 type.go:165] "Request Body" body=""
	I1222 22:53:41.749804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:41.750139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:42.249731  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.249824  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.250150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:42.750207  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.750296  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.750623  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:43.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.250405  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:43.250898  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:43.750513  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.750619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.750993  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.249564  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.249700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.250049  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.749717  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.749793  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.750110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.249673  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.249765  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.749727  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.750168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:45.750236  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:46.249739  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.249823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.250174  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:46.749761  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.749836  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.750164  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.749948  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.750397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:47.750471  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:48.249836  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.249915  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.250168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:48.749809  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.750240  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.249899  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.250266  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:50.249693  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.250117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:50.250187  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:50.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:51.249780  146734 type.go:165] "Request Body" body=""
	I1222 22:53:51.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:51.250247  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:51.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:53:51.749796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:51.750134  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:52.249742  146734 type.go:165] "Request Body" body=""
	I1222 22:53:52.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:52.250156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:52.250222  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:52.750193  146734 type.go:165] "Request Body" body=""
	I1222 22:53:52.750262  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:52.750631  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:53.250282  146734 type.go:165] "Request Body" body=""
	I1222 22:53:53.250365  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:53.250710  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:53.750364  146734 type.go:165] "Request Body" body=""
	I1222 22:53:53.750466  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:53.750806  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:54.250614  146734 type.go:165] "Request Body" body=""
	I1222 22:53:54.250720  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:54.251091  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:54.251160  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:54.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:53:54.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:54.750115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:55.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:53:55.249798  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:55.250115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:55.749721  146734 type.go:165] "Request Body" body=""
	I1222 22:53:55.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:55.750120  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:56.249788  146734 type.go:165] "Request Body" body=""
	I1222 22:53:56.249864  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:56.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:56.749823  146734 type.go:165] "Request Body" body=""
	I1222 22:53:56.749924  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:56.750260  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:56.750323  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:57.249824  146734 type.go:165] "Request Body" body=""
	I1222 22:53:57.249911  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:57.250258  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:57.750028  146734 type.go:165] "Request Body" body=""
	I1222 22:53:57.750101  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:57.750434  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:58.250067  146734 type.go:165] "Request Body" body=""
	I1222 22:53:58.250165  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:58.250680  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:58.750319  146734 type.go:165] "Request Body" body=""
	I1222 22:53:58.750397  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:58.750751  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:58.750816  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:59.250396  146734 type.go:165] "Request Body" body=""
	I1222 22:53:59.250469  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:59.250838  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:59.750501  146734 type.go:165] "Request Body" body=""
	I1222 22:53:59.750579  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:59.750961  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:00.250615  146734 type.go:165] "Request Body" body=""
	I1222 22:54:00.250698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:00.251022  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:00.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:54:00.749712  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:00.750067  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:01.249664  146734 type.go:165] "Request Body" body=""
	I1222 22:54:01.249759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:01.250103  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:01.250170  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:01.749661  146734 type.go:165] "Request Body" body=""
	I1222 22:54:01.749759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:01.750063  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:02.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:02.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:02.250139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:02.750137  146734 type.go:165] "Request Body" body=""
	I1222 22:54:02.750239  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:02.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:03.249783  146734 type.go:165] "Request Body" body=""
	I1222 22:54:03.249870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:03.250198  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:03.250261  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:03.749859  146734 type.go:165] "Request Body" body=""
	I1222 22:54:03.749942  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:03.750266  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:04.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:54:04.249924  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:04.250252  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:04.749814  146734 type.go:165] "Request Body" body=""
	I1222 22:54:04.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:04.750218  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:05.249838  146734 type.go:165] "Request Body" body=""
	I1222 22:54:05.249920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:05.250251  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:05.250311  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:05.749869  146734 type.go:165] "Request Body" body=""
	I1222 22:54:05.749946  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:05.750283  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:06.249832  146734 type.go:165] "Request Body" body=""
	I1222 22:54:06.249906  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:06.250221  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:06.749788  146734 type.go:165] "Request Body" body=""
	I1222 22:54:06.749871  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:06.750209  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:07.249776  146734 type.go:165] "Request Body" body=""
	I1222 22:54:07.249854  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:07.250183  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:07.749835  146734 type.go:165] "Request Body" body=""
	I1222 22:54:07.749920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:07.750202  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:07.750265  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:08.249824  146734 type.go:165] "Request Body" body=""
	I1222 22:54:08.249910  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:08.250234  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:08.749844  146734 type.go:165] "Request Body" body=""
	I1222 22:54:08.749966  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:08.750291  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:09.249871  146734 type.go:165] "Request Body" body=""
	I1222 22:54:09.249975  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:09.250275  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:09.749927  146734 type.go:165] "Request Body" body=""
	I1222 22:54:09.750002  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:09.750347  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:09.750418  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:10.249886  146734 type.go:165] "Request Body" body=""
	I1222 22:54:10.249966  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:10.250298  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:10.749734  146734 type.go:165] "Request Body" body=""
	I1222 22:54:10.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:10.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:11.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.249820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.250144  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:11.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.749862  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.750201  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:12.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.249873  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.250162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:12.250235  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:12.750120  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.750539  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.250225  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.250297  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.250634  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.750359  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.750457  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.750827  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:14.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.250556  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.250898  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:14.250968  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:14.750562  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.750659  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.750987  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.250672  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.250773  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.251113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.749701  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.749784  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.750120  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.249715  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.249790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.250112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.749747  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.749839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.750218  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:16.750293  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:17.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.249860  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:17.750000  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.750088  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.750446  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.250179  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.749732  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.749816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.750154  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:19.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.250196  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:19.250264  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:19.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.749819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.750126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.249785  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.250091  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.750150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.249797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.250083  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.749694  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.749796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.750142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:21.750217  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET /api/v1/nodes/functional-384766 cycles repeat every ~500ms from 22:54:22 through 22:55:23, each returning an empty response (status="" milliseconds=0); node_ready.go:55 emits the "will retry" warning below roughly every 2s, the last at 22:55:21 ...]
	W1222 22:55:21.750422  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:23.750003  146734 type.go:165] "Request Body" body=""
	I1222 22:55:23.750079  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:23.750487  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:23.750580  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:24.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.250165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:24.749758  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.749828  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.750192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.249696  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.249769  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.250072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.749697  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:26.249886  146734 type.go:165] "Request Body" body=""
	I1222 22:55:26.249958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:26.250275  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:26.250336  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:26.750067  146734 type.go:165] "Request Body" body=""
	I1222 22:55:26.750154  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:26.750517  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:27.250329  146734 type.go:165] "Request Body" body=""
	I1222 22:55:27.250410  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:27.250697  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:27.750580  146734 type.go:165] "Request Body" body=""
	I1222 22:55:27.750669  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:27.751022  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:28.249726  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.249808  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:28.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.749811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:28.750237  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:29.249909  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.249982  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.250305  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:29.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.750450  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.250215  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.250646  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.750488  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.750567  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.750921  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:30.750983  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:31.249701  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.249780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:31.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.749792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.750145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.249871  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.249942  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.250315  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.750384  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.750771  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:33.250608  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.250702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.251142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:33.251214  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:33.749937  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.750014  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.750358  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.250212  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.250294  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.250661  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.750449  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.750521  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.750895  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.249645  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.249716  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.749839  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.749916  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.750258  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:35.750321  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:36.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.249815  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.250128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:36.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.749780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.750111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.249872  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.249947  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.250270  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.750107  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.750195  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.750537  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:37.750607  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:38.250386  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.250473  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.250844  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:38.749618  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.749699  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.750037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.249712  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.249799  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.250114  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.749869  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.749958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.750319  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:40.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.249789  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.250125  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:40.250192  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:40.749723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.249867  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.249964  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.250300  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.750030  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.750104  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.750449  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:42.250237  146734 type.go:165] "Request Body" body=""
	I1222 22:55:42.250316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:42.250702  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:42.250790  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:42.749698  146734 type.go:165] "Request Body" body=""
	I1222 22:55:42.749778  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:42.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:43.249908  146734 type.go:165] "Request Body" body=""
	I1222 22:55:43.249984  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:43.250316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:43.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:55:43.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:43.750109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:44.249847  146734 type.go:165] "Request Body" body=""
	I1222 22:55:44.249920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:44.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:44.749977  146734 type.go:165] "Request Body" body=""
	I1222 22:55:44.750059  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:44.750429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:44.750493  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:45.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:55:45.250340  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:45.250671  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:45.750529  146734 type.go:165] "Request Body" body=""
	I1222 22:55:45.750635  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:45.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:46.249762  146734 type.go:165] "Request Body" body=""
	I1222 22:55:46.249839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:46.250179  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:46.749734  146734 type.go:165] "Request Body" body=""
	I1222 22:55:46.749810  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:46.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:47.249888  146734 type.go:165] "Request Body" body=""
	I1222 22:55:47.249964  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:47.250293  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:47.250367  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:47.750084  146734 type.go:165] "Request Body" body=""
	I1222 22:55:47.750158  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:47.750514  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:48.250329  146734 type.go:165] "Request Body" body=""
	I1222 22:55:48.250397  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:48.250735  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:48.750528  146734 type.go:165] "Request Body" body=""
	I1222 22:55:48.750629  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:48.750960  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:49.249711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:49.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:49.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:49.749756  146734 type.go:165] "Request Body" body=""
	I1222 22:55:49.749840  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:49.750202  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:49.750280  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:50.250567  146734 type.go:165] "Request Body" body=""
	I1222 22:55:50.250668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:50.251016  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:50.749759  146734 type.go:165] "Request Body" body=""
	I1222 22:55:50.749830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:50.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:51.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:55:51.249810  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:51.250134  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:51.749846  146734 type.go:165] "Request Body" body=""
	I1222 22:55:51.749921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:51.750215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:52.249929  146734 type.go:165] "Request Body" body=""
	I1222 22:55:52.250026  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:52.250348  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:52.250418  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:52.750253  146734 type.go:165] "Request Body" body=""
	I1222 22:55:52.750351  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:52.750714  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:53.250551  146734 type.go:165] "Request Body" body=""
	I1222 22:55:53.250675  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:53.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:53.749793  146734 type.go:165] "Request Body" body=""
	I1222 22:55:53.749867  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:53.750205  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:54.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:55:54.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:54.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:54.749935  146734 type.go:165] "Request Body" body=""
	I1222 22:55:54.750018  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:54.750343  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:54.750406  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:55.250174  146734 type.go:165] "Request Body" body=""
	I1222 22:55:55.250255  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:55.250582  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:55.750396  146734 type.go:165] "Request Body" body=""
	I1222 22:55:55.750466  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:55.750800  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:56.250589  146734 type.go:165] "Request Body" body=""
	I1222 22:55:56.250683  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:56.251003  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:56.749740  146734 type.go:165] "Request Body" body=""
	I1222 22:55:56.749822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:56.750149  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:57.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:55:57.249767  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:57.250019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:57.250065  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:57.749911  146734 type.go:165] "Request Body" body=""
	I1222 22:55:57.749984  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:57.750312  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:58.250079  146734 type.go:165] "Request Body" body=""
	I1222 22:55:58.250158  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:58.250482  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:58.749773  146734 type.go:165] "Request Body" body=""
	I1222 22:55:58.749843  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:58.750137  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:59.249988  146734 type.go:165] "Request Body" body=""
	I1222 22:55:59.250074  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:59.250366  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:59.250416  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:59.750170  146734 type.go:165] "Request Body" body=""
	I1222 22:55:59.750247  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:59.750648  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:00.250540  146734 type.go:165] "Request Body" body=""
	I1222 22:56:00.250654  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:00.250961  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:00.749739  146734 type.go:165] "Request Body" body=""
	I1222 22:56:00.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:00.750177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:01.249895  146734 type.go:165] "Request Body" body=""
	I1222 22:56:01.249986  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:01.250340  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:01.750038  146734 type.go:165] "Request Body" body=""
	I1222 22:56:01.750113  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:01.750490  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:01.750581  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:02.250443  146734 type.go:165] "Request Body" body=""
	I1222 22:56:02.250525  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:02.250895  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:02.749821  146734 type.go:165] "Request Body" body=""
	I1222 22:56:02.749917  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:02.750273  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:03.250163  146734 type.go:165] "Request Body" body=""
	I1222 22:56:03.250251  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:03.250624  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:03.750411  146734 type.go:165] "Request Body" body=""
	I1222 22:56:03.750512  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:03.750893  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:03.750957  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:04.249658  146734 type.go:165] "Request Body" body=""
	I1222 22:56:04.249737  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:04.250088  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:04.749706  146734 type.go:165] "Request Body" body=""
	I1222 22:56:04.749798  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:04.750106  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:05.249956  146734 type.go:165] "Request Body" body=""
	I1222 22:56:05.250031  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:05.250365  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:05.750235  146734 type.go:165] "Request Body" body=""
	I1222 22:56:05.750322  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:05.750688  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:06.250487  146734 type.go:165] "Request Body" body=""
	I1222 22:56:06.250559  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:06.250842  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:06.250893  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:06.749620  146734 type.go:165] "Request Body" body=""
	I1222 22:56:06.749705  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:06.750048  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:07.249785  146734 type.go:165] "Request Body" body=""
	I1222 22:56:07.249875  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:07.250216  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:07.750003  146734 type.go:165] "Request Body" body=""
	I1222 22:56:07.750078  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:07.750408  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:08.250236  146734 type.go:165] "Request Body" body=""
	I1222 22:56:08.250327  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:08.250694  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:08.750527  146734 type.go:165] "Request Body" body=""
	I1222 22:56:08.750631  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:08.750990  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:08.751068  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:09.249747  146734 type.go:165] "Request Body" body=""
	I1222 22:56:09.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:09.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:09.749886  146734 type.go:165] "Request Body" body=""
	I1222 22:56:09.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:09.750308  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:10.249647  146734 type.go:165] "Request Body" body=""
	I1222 22:56:10.249737  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:10.250057  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:10.749709  146734 type.go:165] "Request Body" body=""
	I1222 22:56:10.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:10.750127  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:11.249845  146734 type.go:165] "Request Body" body=""
	I1222 22:56:11.249934  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:11.250260  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:11.250328  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:11.749676  146734 type.go:165] "Request Body" body=""
	I1222 22:56:11.749752  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:11.750075  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:12.249809  146734 type.go:165] "Request Body" body=""
	I1222 22:56:12.249905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:12.250221  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:12.750192  146734 type.go:165] "Request Body" body=""
	I1222 22:56:12.750274  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:12.750624  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:13.250511  146734 type.go:165] "Request Body" body=""
	I1222 22:56:13.250619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:13.250949  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:13.251021  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[polling elided: the GET https://192.168.49.2:8441/api/v1/nodes/functional-384766 request/response pair above repeats on the same ~500 ms cadence from 22:56:13.749 through 22:57:14.250, every attempt returning an empty response (status="" milliseconds=0), and node_ready.go:55 logs the identical will-retry warning every 2 to 2.5 s, each time with: dial tcp 192.168.49.2:8441: connect: connection refused]
	I1222 22:57:14.749931  146734 type.go:165] "Request Body" body=""
	I1222 22:57:14.750016  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:14.750331  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.250077  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.250149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.250459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.750281  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.750366  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.750687  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:16.250490  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.250582  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.250949  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:16.251012  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:16.749742  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.749829  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.750173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.249915  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.250011  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.750089  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.750167  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.750505  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.250322  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.250402  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.250802  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.749588  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.749719  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.750078  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:18.750142  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:19.249857  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.249933  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.250256  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:19.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.749854  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.750208  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.249748  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.249831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.250157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.750100  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.750194  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.750870  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:20.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:21.249638  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.249718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:21.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.749701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.750025  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.249683  146734 type.go:165] "Request Body" body=""
	I1222 22:57:22.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:22.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.750117  146734 node_ready.go:38] duration metric: took 6m0.000675026s for node "functional-384766" to be "Ready" ...
	I1222 22:57:22.752685  146734 out.go:203] 
	W1222 22:57:22.753745  146734 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 22:57:22.753760  146734 out.go:285] * 
	W1222 22:57:22.753991  146734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 22:57:22.755053  146734 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.767074805Z" level=info msg="Loading containers: done."
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776636662Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776667996Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776702670Z" level=info msg="Initializing buildkit"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.795232210Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799403213Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799466264Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799532589Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799493974Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:51:20 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:20 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:51:21 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:51:21 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:57:24.127311   16444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:24.127828   16444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:24.129379   16444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:24.129809   16444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:24.131315   16444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 22:57:24 up  2:39,  0 user,  load average: 0.72, 0.31, 0.57
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 22:57:20 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:21 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 807.
	Dec 22 22:57:21 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:21 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:21 functional-384766 kubelet[16271]: E1222 22:57:21.287189   16271 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:21 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:21 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:21 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 22 22:57:21 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:21 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:22 functional-384766 kubelet[16282]: E1222 22:57:22.036547   16282 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:22 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:22 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:22 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 22 22:57:22 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:22 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:22 functional-384766 kubelet[16294]: E1222 22:57:22.797205   16294 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:22 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:22 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:23 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 22 22:57:23 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:23 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:23 functional-384766 kubelet[16319]: E1222 22:57:23.552489   16319 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:23 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:23 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
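The kubelet section above carries the root cause for this failure: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), so the API server never binds port 8441 and every node-ready poll is refused. A quick way to confirm the host's cgroup setup (a diagnostic sketch, assuming shell access to the node, e.g. via minikube ssh; not part of the recorded test run):

	# prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# Docker's view of the same thing; per the daemon warning above this host is on v1
	docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'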
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (307.005367ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (367.33s)
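With the API server stopped, every subtest below that talks to the cluster fails with the same "connection refused". The endpoint can be probed directly from the host (a diagnostic sketch using the IP and port recorded in the log above; on this run the TCP connect itself is refused):

	# a healthy apiserver answers on /healthz; here the connection is refused
	curl -sk https://192.168.49.2:8441/healthz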

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (1.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-384766 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-384766 get po -A: exit status 1 (45.108347ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-384766 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-384766 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-384766 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
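The inspect output shows the container still running, with apiserver port 8441/tcp published on 127.0.0.1:32786; the same mapping can be read back more compactly with the standard Docker CLI:

	docker port functional-384766 8441
	# 127.0.0.1:32786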
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (290.242856ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                           ARGS                                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                                         │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ ssh            │ functional-580825 ssh pgrep buildkitd                                                                                                                                                    │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image          │ functional-580825 image ls --format json --alsologtostderr                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls --format short --alsologtostderr                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image          │ functional-580825 image ls --format yaml --alsologtostderr                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls --format table --alsologtostderr                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr                                                                                   │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image ls                                                                                                                                                               │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ delete         │ -p functional-580825                                                                                                                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ start          │ -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start          │ -p functional-384766 --alsologtostderr -v=8                                                                                                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:51 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
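	The last audit row is the soft-start retry whose log follows; the run can be reproduced with the same flags recorded there (binary path as in this workspace):
	
		out/minikube-linux-amd64 start -p functional-384766 --alsologtostderr -v=8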
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:51:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:51:17.565426  146734 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:51:17.565716  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565727  146734 out.go:374] Setting ErrFile to fd 2...
	I1222 22:51:17.565732  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565972  146734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:51:17.566463  146734 out.go:368] Setting JSON to false
	I1222 22:51:17.567434  146734 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9218,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:51:17.567486  146734 start.go:143] virtualization: kvm guest
	I1222 22:51:17.569465  146734 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:51:17.570460  146734 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:51:17.570465  146734 notify.go:221] Checking for updates...
	I1222 22:51:17.572456  146734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:51:17.573608  146734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:17.574791  146734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:51:17.575840  146734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:51:17.576824  146734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:51:17.578279  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:17.578404  146734 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:51:17.602058  146734 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:51:17.602223  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.652786  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.644025132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.652901  146734 docker.go:319] overlay module found
	I1222 22:51:17.655127  146734 out.go:179] * Using the docker driver based on existing profile
	I1222 22:51:17.656150  146734 start.go:309] selected driver: docker
	I1222 22:51:17.656165  146734 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.656249  146734 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:51:17.656337  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.716062  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.707507925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.716919  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:17.717012  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:17.717085  146734 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
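The cluster config printed above is also persisted to disk; the profile.go line below saves it as JSON under the profile directory. Individual fields can be pulled out on the host with a one-liner like the following (assuming jq is installed on the CI host; it is not part of the log above):

	# assumes jq is available on the host
	jq '.KubernetesConfig.KubernetesVersion, .Nodes[0].IP' /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json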
	I1222 22:51:17.719515  146734 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:51:17.720631  146734 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:51:17.721792  146734 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:51:17.723064  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:17.723095  146734 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:51:17.723112  146734 cache.go:65] Caching tarball of preloaded images
	I1222 22:51:17.723172  146734 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:51:17.723191  146734 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:51:17.723198  146734 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:51:17.723299  146734 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:51:17.742349  146734 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:51:17.742368  146734 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:51:17.742396  146734 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:51:17.742444  146734 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:51:17.742506  146734 start.go:364] duration metric: took 41.881µs to acquireMachinesLock for "functional-384766"
	I1222 22:51:17.742535  146734 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:51:17.742545  146734 fix.go:54] fixHost starting: 
	I1222 22:51:17.742810  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:17.759507  146734 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:51:17.759531  146734 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:51:17.761090  146734 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:51:17.761123  146734 machine.go:94] provisionDockerMachine start ...
	I1222 22:51:17.761180  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.778682  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.778900  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.778912  146734 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:51:17.919326  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:17.919369  146734 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:51:17.919431  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.936992  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.937221  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.937234  146734 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:51:18.086470  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:18.086564  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.104748  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.105051  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.105077  146734 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
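The command above keeps /etc/hosts idempotent: the 127.0.1.1 entry is rewritten only when no line already ends with the hostname, so re-running it never duplicates the entry. Assuming the driver container is still named functional-384766, the result can be spot-checked from the host:

	# assumes the kic container is running and named functional-384766
	docker exec functional-384766 grep functional-384766 /etc/hosts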
	I1222 22:51:18.246730  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:51:18.246760  146734 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:51:18.246782  146734 ubuntu.go:190] setting up certificates
	I1222 22:51:18.246792  146734 provision.go:84] configureAuth start
	I1222 22:51:18.246854  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:18.265782  146734 provision.go:143] copyHostCerts
	I1222 22:51:18.265828  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.265879  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:51:18.265900  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.266005  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:51:18.266139  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266163  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:51:18.266175  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266220  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:51:18.266317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266344  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:51:18.266355  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266400  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:51:18.266499  146734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:51:18.330118  146734 provision.go:177] copyRemoteCerts
	I1222 22:51:18.330177  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:51:18.330210  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.347420  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:18.447556  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 22:51:18.447646  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:51:18.464129  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 22:51:18.464180  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:51:18.480702  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 22:51:18.480757  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 22:51:18.496998  146734 provision.go:87] duration metric: took 250.195084ms to configureAuth
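configureAuth refreshed the client and server TLS material that dockerd is (re)started with further below (tcp://0.0.0.0:2376 with --tlsverify). A hedged way to confirm the daemon actually accepts those certs, assuming 192.168.49.2 is reachable from this host, is a plain docker version call over the TLS endpoint:

	# assumes the 192.168.49.2 bridge address is routable from the host
	docker --tlsverify \
		--tlscacert /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem \
		--tlscert /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem \
		--tlskey /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem \
		-H tcp://192.168.49.2:2376 version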
	I1222 22:51:18.497021  146734 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:51:18.497168  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:18.497218  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.514380  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.514623  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.514636  146734 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:51:18.655354  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:51:18.655383  146734 ubuntu.go:71] root file system type: overlay
	I1222 22:51:18.655533  146734 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:51:18.655634  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.673540  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.673819  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.673915  146734 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:51:18.823487  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:51:18.823601  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.841347  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.841608  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.841639  146734 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:51:18.987007  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
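The guard above only swaps docker.service.new into place and restarts docker when it differs from the live unit; the empty diff output here suggests the restart branch was skipped. Assuming systemd-analyze ships in the kic image, the rendered unit can be sanity-checked in place:

	# assumes systemd-analyze is present in the kic base image
	docker exec functional-384766 sudo systemd-analyze verify /lib/systemd/system/docker.service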
	I1222 22:51:18.987042  146734 machine.go:97] duration metric: took 1.225905804s to provisionDockerMachine
	I1222 22:51:18.987059  146734 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:51:18.987075  146734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:51:18.987145  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:51:18.987199  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.006696  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.107530  146734 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:51:19.110931  146734 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 22:51:19.110952  146734 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 22:51:19.110959  146734 command_runner.go:130] > VERSION_ID="12"
	I1222 22:51:19.110964  146734 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 22:51:19.110979  146734 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 22:51:19.110985  146734 command_runner.go:130] > ID=debian
	I1222 22:51:19.110992  146734 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 22:51:19.111000  146734 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 22:51:19.111012  146734 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 22:51:19.111100  146734 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:51:19.111124  146734 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:51:19.111137  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:51:19.111205  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:51:19.111317  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:51:19.111330  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /etc/ssl/certs/758032.pem
	I1222 22:51:19.111426  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:51:19.111434  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> /etc/test/nested/copy/75803/hosts
	I1222 22:51:19.111495  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:51:19.119122  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:19.135900  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:51:19.152438  146734 start.go:296] duration metric: took 165.360222ms for postStartSetup
	I1222 22:51:19.152512  146734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:51:19.152568  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.170181  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.267525  146734 command_runner.go:130] > 37%
	I1222 22:51:19.267628  146734 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:51:19.272133  146734 command_runner.go:130] > 185G
	I1222 22:51:19.272164  146734 fix.go:56] duration metric: took 1.529618595s for fixHost
	I1222 22:51:19.272178  146734 start.go:83] releasing machines lock for "functional-384766", held for 1.529658247s
	I1222 22:51:19.272243  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:19.290506  146734 ssh_runner.go:195] Run: cat /version.json
	I1222 22:51:19.290562  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.290583  146734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:51:19.290685  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.307884  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.308688  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.461522  146734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 22:51:19.463216  146734 command_runner.go:130] > {"iso_version": "v1.37.0-1766254259-22261", "kicbase_version": "v0.0.48-1766394456-22288", "minikube_version": "v1.37.0", "commit": "069cfc84263169a672fdad8d37486b5cb35673ac"}
	I1222 22:51:19.463366  146734 ssh_runner.go:195] Run: systemctl --version
	I1222 22:51:19.469697  146734 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 22:51:19.469761  146734 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 22:51:19.469847  146734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 22:51:19.474292  146734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 22:51:19.474367  146734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:51:19.474416  146734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:51:19.482031  146734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:51:19.482056  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.482091  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.482215  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.495227  146734 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 22:51:19.495298  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:51:19.503438  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:51:19.511525  146734 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:51:19.511574  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:51:19.519676  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.527517  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:51:19.535615  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.543569  146734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:51:19.550965  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:51:19.559037  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:51:19.567079  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:51:19.575222  146734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:51:19.582102  146734 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 22:51:19.582154  146734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
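Both kernel prerequisites for the bridge CNI are asserted here: bridged traffic must traverse iptables, and IPv4 forwarding must be enabled. They can be re-read at any point while the container runs:

	# re-reads the two knobs the log just checked and set
	docker exec functional-384766 sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward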
	I1222 22:51:19.588882  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:19.668907  146734 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 22:51:19.740882  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.740926  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.740967  146734 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:51:19.753727  146734 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1222 22:51:19.753762  146734 command_runner.go:130] > [Unit]
	I1222 22:51:19.753770  146734 command_runner.go:130] > Description=Docker Application Container Engine
	I1222 22:51:19.753778  146734 command_runner.go:130] > Documentation=https://docs.docker.com
	I1222 22:51:19.753787  146734 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1222 22:51:19.753797  146734 command_runner.go:130] > Wants=network-online.target containerd.service
	I1222 22:51:19.753808  146734 command_runner.go:130] > Requires=docker.socket
	I1222 22:51:19.753815  146734 command_runner.go:130] > StartLimitBurst=3
	I1222 22:51:19.753825  146734 command_runner.go:130] > StartLimitIntervalSec=60
	I1222 22:51:19.753833  146734 command_runner.go:130] > [Service]
	I1222 22:51:19.753841  146734 command_runner.go:130] > Type=notify
	I1222 22:51:19.753848  146734 command_runner.go:130] > Restart=always
	I1222 22:51:19.753862  146734 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1222 22:51:19.753882  146734 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1222 22:51:19.753896  146734 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1222 22:51:19.753910  146734 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1222 22:51:19.753923  146734 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1222 22:51:19.753937  146734 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1222 22:51:19.753952  146734 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1222 22:51:19.753969  146734 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1222 22:51:19.753983  146734 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1222 22:51:19.753991  146734 command_runner.go:130] > ExecStart=
	I1222 22:51:19.754018  146734 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1222 22:51:19.754031  146734 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1222 22:51:19.754046  146734 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1222 22:51:19.754060  146734 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1222 22:51:19.754067  146734 command_runner.go:130] > LimitNOFILE=infinity
	I1222 22:51:19.754076  146734 command_runner.go:130] > LimitNPROC=infinity
	I1222 22:51:19.754084  146734 command_runner.go:130] > LimitCORE=infinity
	I1222 22:51:19.754095  146734 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1222 22:51:19.754107  146734 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1222 22:51:19.754115  146734 command_runner.go:130] > TasksMax=infinity
	I1222 22:51:19.754124  146734 command_runner.go:130] > TimeoutStartSec=0
	I1222 22:51:19.754137  146734 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1222 22:51:19.754152  146734 command_runner.go:130] > Delegate=yes
	I1222 22:51:19.754162  146734 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1222 22:51:19.754171  146734 command_runner.go:130] > KillMode=process
	I1222 22:51:19.754179  146734 command_runner.go:130] > OOMScoreAdjust=-500
	I1222 22:51:19.754187  146734 command_runner.go:130] > [Install]
	I1222 22:51:19.754196  146734 command_runner.go:130] > WantedBy=multi-user.target
	I1222 22:51:19.754834  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.766639  146734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:51:19.781290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.792290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:51:19.803490  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.815697  146734 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
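/etc/crictl.yaml has now been switched from the containerd socket to cri-dockerd, so later crictl calls go through the Docker CRI shim. The explicit equivalent, using the crictl path discovered just below, is:

	# passing the endpoint explicitly rather than via /etc/crictl.yaml
	docker exec functional-384766 sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version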
	I1222 22:51:19.816642  146734 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:51:19.820210  146734 command_runner.go:130] > /usr/bin/cri-dockerd
	I1222 22:51:19.820315  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:51:19.827693  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:51:19.839649  146734 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:51:19.921176  146734 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:51:20.004043  146734 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:51:20.004160  146734 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:51:20.017007  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:51:20.028524  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:20.107815  146734 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:51:20.801234  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:51:20.813428  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:51:20.824782  146734 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:51:20.839450  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:20.850829  146734 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:51:20.931099  146734 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:51:21.012149  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.092742  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:51:21.120647  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:51:21.132196  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.256485  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:51:21.327564  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:21.340042  146734 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:51:21.340117  146734 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:51:21.343842  146734 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1222 22:51:21.343869  146734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 22:51:21.343877  146734 command_runner.go:130] > Device: 0,75	Inode: 1744        Links: 1
	I1222 22:51:21.343888  146734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1222 22:51:21.343895  146734 command_runner.go:130] > Access: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343909  146734 command_runner.go:130] > Modify: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343924  146734 command_runner.go:130] > Change: 2025-12-22 22:51:21.279839753 +0000
	I1222 22:51:21.343935  146734 command_runner.go:130] >  Birth: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343976  146734 start.go:564] Will wait 60s for crictl version
	I1222 22:51:21.344020  146734 ssh_runner.go:195] Run: which crictl
	I1222 22:51:21.347282  146734 command_runner.go:130] > /usr/local/bin/crictl
	I1222 22:51:21.347341  146734 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:51:21.370719  146734 command_runner.go:130] > Version:  0.1.0
	I1222 22:51:21.370739  146734 command_runner.go:130] > RuntimeName:  docker
	I1222 22:51:21.370743  146734 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1222 22:51:21.370748  146734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 22:51:21.370764  146734 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:51:21.370812  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.395767  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.395836  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.418820  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.422122  146734 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:51:21.422206  146734 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:51:21.439338  146734 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:51:21.443526  146734 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 22:51:21.443628  146734 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:51:21.443753  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:21.443822  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.464281  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.464308  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.464318  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.464325  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.464332  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.464340  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.464348  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.464366  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.464395  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.464407  146734 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:51:21.464455  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.482666  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.482684  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.482690  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.482697  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.482704  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.482712  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.482729  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.482739  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.483998  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
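The preload check boils down to comparing this image list against the expected set for v1.35.0-rc.1. The listing can be reproduced against the container's inner Docker daemon, assuming it is still running:

	# same command the log runs over ssh, issued from the host instead
	docker exec functional-384766 docker images --format '{{.Repository}}:{{.Tag}}'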
	I1222 22:51:21.484022  146734 cache_images.go:86] Images are preloaded, skipping loading
	I1222 22:51:21.484036  146734 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:51:21.484172  146734 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
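The kubelet drop-in above clears the packaged ExecStart and relaunches the versioned kubelet binary with node-specific flags. Once the scp steps below have written it to /etc/systemd/system/kubelet.service.d, the merged unit is visible with:

	# shows the base unit plus the 10-kubeadm.conf drop-in
	docker exec functional-384766 sudo systemctl cat kubelet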
	I1222 22:51:21.484238  146734 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:51:21.532066  146734 command_runner.go:130] > cgroupfs
	I1222 22:51:21.533783  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:21.533808  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:21.533825  146734 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:51:21.533845  146734 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:51:21.533961  146734 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
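This multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged to /var/tmp/minikube below. Assuming the staged name kubeadm.yaml.new from the scp step, it can be statically validated with the matching kubeadm binary:

	# assumes the staged file name kubeadm.yaml.new; adjust if it has been renamed
	docker exec functional-384766 sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new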
	
	I1222 22:51:21.534020  146734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:51:21.542124  146734 command_runner.go:130] > kubeadm
	I1222 22:51:21.542141  146734 command_runner.go:130] > kubectl
	I1222 22:51:21.542144  146734 command_runner.go:130] > kubelet
	I1222 22:51:21.542165  146734 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:51:21.542214  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:51:21.549624  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:51:21.561393  146734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:51:21.572932  146734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1222 22:51:21.584412  146734 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:51:21.587798  146734 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 22:51:21.587903  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.667778  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:21.997732  146734 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:51:21.997755  146734 certs.go:195] generating shared ca certs ...
	I1222 22:51:21.997774  146734 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:21.997942  146734 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:51:21.998024  146734 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:51:21.998042  146734 certs.go:257] generating profile certs ...
	I1222 22:51:21.998184  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:51:21.998247  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:51:21.998298  146734 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:51:21.998317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 22:51:21.998340  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 22:51:21.998365  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 22:51:21.998382  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 22:51:21.998399  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 22:51:21.998418  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 22:51:21.998436  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 22:51:21.998454  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 22:51:21.998527  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:51:21.998578  146734 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:51:21.998635  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:51:21.998684  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:51:21.998717  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:51:21.998750  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:51:21.998813  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:21.998854  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /usr/share/ca-certificates/758032.pem
	I1222 22:51:21.998877  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:21.998896  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem -> /usr/share/ca-certificates/75803.pem
	I1222 22:51:21.999493  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:51:22.018141  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:51:22.036416  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:51:22.053080  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:51:22.069323  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:51:22.085369  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:51:22.101485  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:51:22.117634  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:51:22.133612  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:51:22.150125  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:51:22.166578  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:51:22.182911  146734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
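The NewFileAsset entries and scp calls above mirror each local certificate into its in-guest destination (vm_assets.go records the source-to-target mapping, ssh_runner.go performs the copy). A minimal local sketch of the same pattern, using hypothetical paths and plain file I/O instead of minikube's SSH runner:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// fileAsset pairs a local source path with its destination path,
// loosely mirroring the NewFileAsset entries in the log above.
type fileAsset struct {
	src, dst string
}

// copyAsset copies src to dst, creating the destination directory first,
// as the scp step does inside the guest.
func copyAsset(a fileAsset) error {
	in, err := os.Open(a.src)
	if err != nil {
		return err
	}
	defer in.Close()
	if err := os.MkdirAll(filepath.Dir(a.dst), 0o755); err != nil {
		return err
	}
	out, err := os.Create(a.dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths for illustration only.
	assets := []fileAsset{
		{"/tmp/demo/ca.crt", "/tmp/demo/target/certs/ca.crt"},
		{"/tmp/demo/apiserver.crt", "/tmp/demo/target/certs/apiserver.crt"},
	}
	for _, a := range assets {
		if err := copyAsset(a); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
}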
	I1222 22:51:22.194486  146734 ssh_runner.go:195] Run: openssl version
	I1222 22:51:22.199935  146734 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 22:51:22.200169  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.206913  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:51:22.213732  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217037  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217075  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217111  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.249675  146734 command_runner.go:130] > b5213941
	I1222 22:51:22.250033  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:51:22.257095  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.264071  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:51:22.271042  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274411  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274445  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274483  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.307772  146734 command_runner.go:130] > 51391683
	I1222 22:51:22.308113  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 22:51:22.315176  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.322196  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:51:22.329109  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332667  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332691  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332732  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.365940  146734 command_runner.go:130] > 3ec20f2e
	I1222 22:51:22.366181  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
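The three hash-and-symlink sequences above (minikubeCA.pem, 75803.pem, 758032.pem) implement the standard OpenSSL CA lookup scheme: each trusted certificate gets a symlink named <subject-hash>.0 under /etc/ssl/certs so OpenSSL can locate it by hashed subject. A sketch of that step, shelling out to openssl exactly as the log does (paths are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// creates the <hash>.0 symlink in certsDir, mirroring the
// `openssl x509 -hash -noout` + `ln -fs` sequence in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate the force flag of `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the log uses /usr/share/ca-certificates and /etc/ssl/certs.
	if err := linkBySubjectHash("/tmp/demo/minikubeCA.pem", "/tmp/demo/ssl-certs"); err != nil {
		fmt.Println(err)
	}
}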
	I1222 22:51:22.373802  146734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377513  146734 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377537  146734 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 22:51:22.377543  146734 command_runner.go:130] > Device: 8,1	Inode: 809094      Links: 1
	I1222 22:51:22.377550  146734 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 22:51:22.377558  146734 command_runner.go:130] > Access: 2025-12-22 22:47:15.370061162 +0000
	I1222 22:51:22.377566  146734 command_runner.go:130] > Modify: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377574  146734 command_runner.go:130] > Change: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377602  146734 command_runner.go:130] >  Birth: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377678  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:51:22.411266  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.411570  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:51:22.445025  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.445322  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:51:22.479095  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.479395  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:51:22.512263  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.512537  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:51:22.545264  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.545554  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 22:51:22.578867  146734 command_runner.go:130] > Certificate will not expire
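Each `-checkend 86400` invocation above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The equivalent check in pure Go, parsing the PEM and comparing NotAfter (the file path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// within d, equivalent to `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/tmp/demo/apiserver.crt", 24*time.Hour) // illustrative path
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}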
	I1222 22:51:22.579164  146734 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:22.579364  146734 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:51:22.598061  146734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:51:22.605833  146734 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 22:51:22.605851  146734 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 22:51:22.605860  146734 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 22:51:22.605880  146734 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:51:22.605891  146734 kubeadm.go:598] restartPrimaryControlPlane start ...
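The `sudo ls` check above is how minikube decides between a fresh `kubeadm init` and a restart: if the kubelet config, kubeadm flags file, or etcd data directory already exist, it attempts a cluster restart instead. A sketch of that decision under the same three paths (simplified; the real code inspects the `ls` output over SSH):

package main

import (
	"fmt"
	"os"
)

// hasExistingCluster reports whether any of the kubeadm artifacts checked
// in the log are present, which triggers the "cluster restart" path.
func hasExistingCluster() bool {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return true
		}
	}
	return false
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing configuration, will run kubeadm init")
	}
}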
	I1222 22:51:22.605932  146734 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:51:22.613011  146734 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:51:22.613379  146734 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384766" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.613493  146734 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384766" cluster setting kubeconfig missing "functional-384766" context setting]
	I1222 22:51:22.613840  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
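The kubeconfig.go lines above detect that both the "functional-384766" cluster and context entries are missing from the kubeconfig and rewrite the file under a lock. A sketch of that verification step using client-go's clientcmd loader (profile name and path are taken from the log; error handling simplified):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// needsRepair reports whether the kubeconfig at path is missing the
// cluster or context entry for name, as kubeconfig.go:62 reports above.
func needsRepair(path, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	return !hasCluster || !hasContext, nil
}

func main() {
	repair, err := needsRepair("/home/jenkins/minikube-integration/22301-72233/kubeconfig", "functional-384766")
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("needs updating (will repair):", repair)
}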
	I1222 22:51:22.614238  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.614401  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.614887  146734 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 22:51:22.614906  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 22:51:22.614915  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1222 22:51:22.614921  146734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1222 22:51:22.614926  146734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1222 22:51:22.614933  146734 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 22:51:22.614941  146734 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 22:51:22.615340  146734 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:51:22.622321  146734 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 22:51:22.622350  146734 kubeadm.go:602] duration metric: took 16.45181ms to restartPrimaryControlPlane
	I1222 22:51:22.622360  146734 kubeadm.go:403] duration metric: took 43.204719ms to StartCluster
	I1222 22:51:22.622376  146734 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.622430  146734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.622875  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.623066  146734 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 22:51:22.623138  146734 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 22:51:22.623233  146734 addons.go:70] Setting storage-provisioner=true in profile "functional-384766"
	I1222 22:51:22.623261  146734 addons.go:239] Setting addon storage-provisioner=true in "functional-384766"
	I1222 22:51:22.623284  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:22.623296  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.623288  146734 addons.go:70] Setting default-storageclass=true in profile "functional-384766"
	I1222 22:51:22.623322  146734 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384766"
	I1222 22:51:22.623660  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.623809  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.624438  146734 out.go:179] * Verifying Kubernetes components...
	I1222 22:51:22.625531  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:22.644170  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.644380  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.644456  146734 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:22.644766  146734 addons.go:239] Setting addon default-storageclass=true in "functional-384766"
	I1222 22:51:22.644810  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.645336  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.645513  146734 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.645531  146734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 22:51:22.645584  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.667387  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.668028  146734 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.668061  146734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 22:51:22.668129  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.686127  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.735817  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:22.749391  146734 node_ready.go:35] waiting up to 6m0s for node "functional-384766" to be "Ready" ...
	I1222 22:51:22.749553  146734 type.go:165] "Request Body" body=""
	I1222 22:51:22.749681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:22.749924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:22.791529  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.791702  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.858228  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.858293  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858334  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858349  146734 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.860247  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
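The retry.go:84 line above schedules another attempt 300ms after the first connection-refused failure, and subsequent attempts re-run the apply with --force until the apiserver comes back. A compact sketch of that retry loop around an exec'd kubectl (the binary path and manifest come from the log; the backoff policy is simplified here to a fixed delay):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
// succeeds or attempts are exhausted, sleeping delay between tries,
// loosely mirroring the addons.go retry behavior in the log.
func applyWithRetry(kubectl, manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %w: %s", err, out)
		fmt.Println(lastErr)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl", // path from the log
		"/etc/kubernetes/addons/storageclass.yaml",
		5, 300*time.Millisecond)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}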
	I1222 22:51:23.114793  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.124266  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.170075  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.170134  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.179073  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.179145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.250418  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.250774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.384101  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.434813  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.434866  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.600155  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.651352  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.651412  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.749655  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.749735  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.901355  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.952200  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.952267  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.239666  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:24.250121  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.250189  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.250430  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:24.294448  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.294492  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.750059  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.750149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:24.750582  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
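The GET /api/v1/nodes/functional-384766 requests repeating every ~500ms above are node_ready.go polling for the node's Ready condition, tolerating connection-refused errors while the apiserver restarts. A sketch of that loop with client-go and apimachinery's wait helper (kubeconfig path and node name taken from the log; this is an illustration, not minikube's exact code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms for up to timeout, swallowing
// transient errors (e.g. connection refused) so the loop keeps retrying.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("error getting node (will retry):", err)
				return false, nil // not fatal: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22301-72233/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-384766", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}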
	I1222 22:51:24.937883  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:24.989534  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.989576  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.250004  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.250083  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.250431  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:25.372773  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:25.425171  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:25.425216  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.749629  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.749702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.750010  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.170572  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:26.222069  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.222131  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.250327  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.250414  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.250759  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.440137  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:26.491948  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.492006  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.750538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.750885  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:26.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:27.250541  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.250646  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:27.355175  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:27.403566  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:27.406149  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:27.749989  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.750066  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.750396  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.250002  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.250075  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.250397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.438810  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:28.487114  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:28.489616  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:28.750061  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.750134  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.750419  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:29.250032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.250106  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.250445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:29.250522  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:29.750041  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.750138  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.750509  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.249736  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.249807  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.250111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.636760  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:30.689934  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:30.689988  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:30.750216  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.750316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.750667  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:31.250328  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.250434  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.250799  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:31.250876  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:31.750450  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.750530  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.750869  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.250580  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.711876  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:32.750368  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.750445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.750774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.760899  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:32.763771  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.250469  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.250543  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:33.250917  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:33.406152  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:33.457687  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:33.457745  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.750192  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.750291  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.750643  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.250274  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.250352  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.250676  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.749812  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.749877  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.249850  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.516575  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:35.570400  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:35.570450  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:35.749757  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.749831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.750172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:35.750238  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:36.249789  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.249888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:36.749817  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.749889  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.750217  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.249921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.250262  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.750101  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.750202  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.750527  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:37.750609  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:38.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.250333  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.250692  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:38.358924  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:38.409955  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:38.410034  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:38.750557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.750654  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.750998  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.249528  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.249647  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.249920  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.749563  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.749697  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.750029  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:40.249635  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.249710  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.250037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:40.250107  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:40.749663  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.749734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.750058  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.249687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.750194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:42.249756  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.249861  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:42.250268  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:42.750151  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.750674  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.250325  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.250412  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.250779  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.750422  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.750837  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:44.250504  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.250574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.250924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:44.250995  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:44.670446  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:44.719419  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:44.722302  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:44.722343  146734 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
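The retry.go lines record minikube's addon-apply retry: each failed kubectl apply is re-queued with a growing, jittered delay (the intervals logged in this run, 8.9s, 11.1s, 11.9s, 19.1s, 22s and 41s, suggest randomized backoff). A rough sketch of that pattern, assuming doubling backoff with jitter rather than minikube's exact schedule:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply` (kubectl must be on PATH)
    // with a growing, jittered delay between attempts.
    func applyWithRetry(manifest string, attempts int) error {
    	var lastErr error
    	base := 5 * time.Second
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("%w: %s", err, out)
    		delay := base + time.Duration(rand.Int63n(int64(base))) // jitter in [0, base)
    		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
    		time.Sleep(delay)
    		base *= 2 // widen the window each attempt
    	}
    	return lastErr
    }

    func main() {
    	// manifest path taken from the log above; succeeds once the apiserver is back
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }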
	I1222 22:51:44.750527  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.750632  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.750954  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.250633  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.250718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.251044  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.749665  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.749749  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.249725  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.749687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.749756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.750050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:46.750108  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:47.249647  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.249753  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.250081  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:47.750272  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.750344  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.250351  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.250445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.250816  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.750455  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.750540  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.750902  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:48.750964  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:49.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.250653  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.250985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:49.750603  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.750681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.249551  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.249641  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.249968  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.750576  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.750686  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.751008  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:50.751079  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:51.249557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.249656  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.249983  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:51.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.750094  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.572783  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:52.624461  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:52.624509  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.624539  146734 retry.go:84] will retry after 8.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.749682  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.749751  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:53.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:53.250202  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:53.749743  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.750165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.249922  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.250320  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.749879  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.750325  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:55.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.249811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.250186  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:55.250256  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:55.749720  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.750101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.249777  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.630698  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:56.682682  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:56.682728  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:56.682750  146734 retry.go:84] will retry after 19.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:56.749962  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.750390  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.249859  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.250169  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.750032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.750112  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.750459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:57.750526  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:58.250052  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.250129  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.250484  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:58.750074  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.750164  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.750559  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.250376  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.250455  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.750547  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.750668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.751053  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:59.751124  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:00.249679  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.249756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.250124  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:00.749766  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.749870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.750200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.250214  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.555677  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:01.608817  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:01.608873  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.608898  146734 retry.go:84] will retry after 11.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.750139  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.750232  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.750541  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:02.250350  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.250446  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.250884  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:02.250959  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:02.749991  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.750087  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.750489  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.249785  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.250222  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.749863  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.749953  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.750330  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.249910  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.249991  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.749787  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.749878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.750255  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:04.750328  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:05.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.249881  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.250215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:05.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.749905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.750236  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.250166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.749813  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.750292  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:06.750353  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:07.249913  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.249997  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.250350  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:07.750157  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.750249  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.750625  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.250269  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.250349  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.250699  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.750338  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.750417  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.750817  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:08.750880  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:09.250447  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.250886  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:09.750542  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.750651  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.751017  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.249667  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.250007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.749614  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.749698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.749986  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:11.249644  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.249721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.250050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:11.250115  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:11.749702  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.749781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.250570  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.250676  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.749204  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:12.749953  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.750037  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.750364  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.803295  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:12.803361  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:12.803388  146734 retry.go:84] will retry after 41s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:13.249864  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.249961  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.250341  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:13.250413  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:13.749947  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.750050  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.750385  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.249969  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.250047  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.250429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.250182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.749736  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.750176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:15.750272  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:15.781356  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:15.834579  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:15.834644  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:15.834678  146734 retry.go:84] will retry after 22s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:16.250185  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.250641  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:16.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.750391  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.750749  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.250375  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.250470  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.250796  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.750663  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.750770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.751155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:17.751219  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:18.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.249774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:18.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.750127  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.249772  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.249846  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the identical GET was repeated every ~500ms through 22:52:37.751; every response came back empty, and node_ready.go:55 warned roughly every 2s: error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused]
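The loop above is minikube waiting for the node's "Ready" condition while the apiserver refuses every connection. A minimal sketch of this kind of wait loop, assuming client-go and the kubeconfig path seen in the log; it is illustrative, not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver every 500ms, as the log above does, until
// the named node reports Ready or the context expires. Transport errors such
// as "connection refused" are logged and retried rather than treated as
// fatal, matching the W-level "will retry" entries in the log.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-384766"))
}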
	I1222 22:52:37.791261  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:37.841791  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:37.841848  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:37.841882  146734 retry.go:84] will retry after 24.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
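The "will retry after 24.9s" line comes from minikube's retry scheduler backing off between apply attempts. A self-contained sketch of that exponential-backoff-with-jitter pattern; the constants and jitter policy below are assumptions, and the 24.9s in the log reflects minikube's own policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponential backoff plus jitter, capped at max.
// It mirrors the shape of the scheduler behind "will retry after 24.9s";
// the exact figures are illustrative.
func retryExpo(fn func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // up to 100% jitter
		if wait > max {
			wait = max
		}
		fmt.Printf("will retry after %v: %v\n", wait.Round(100*time.Millisecond), err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryExpo(func() error {
		calls++
		if calls < 3 {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	}, 2*time.Second, 30*time.Second, 5)
	fmt.Println("final:", err)
}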
	[log condensed: polling resumed at 22:52:38 and repeated every ~500ms through 22:52:53.751, with the same empty responses and periodic connection-refused retry warnings from node_ready.go:55]
	I1222 22:52:53.825965  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:53.875648  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878317  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878441  146734 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
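kubectl's hint to pass --validate=false is misleading here: skipping validation only avoids the client-side OpenAPI download, and the apply itself would still fail because nothing is listening on port 8441. A quick probe that distinguishes "apiserver down" from a validation-only failure, using the two endpoints seen in the log (illustrative sketch):

package main

import (
	"fmt"
	"net"
	"time"
)

// probe reports whether anything is accepting TCP connections on an
// apiserver endpoint. "connection refused" means --validate=false would
// not help: the apply itself has nowhere to go.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: accepting connections\n", addr)
}

func main() {
	probe("192.168.49.2:8441") // endpoint used by the node-Ready poll
	probe("127.0.0.1:8441")    // localhost endpoint used by kubectl apply
}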
	[log condensed: polling continued every ~500ms from 22:52:54 through 22:53:02.750, still connection refused, with node_ready.go:55 retry warnings every ~2s]
	I1222 22:53:02.761644  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:53:02.811523  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814242  146734 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 22:53:02.815929  146734 out.go:179] * Enabled addons: 
	I1222 22:53:02.817068  146734 addons.go:530] duration metric: took 1m40.193946362s for enable addons: enabled=[]
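"enabled=[]" records that the run gave up with no addons applied. Once the apiserver is reachable again, the missing default StorageClass can be confirmed directly; a sketch using client-go, where the kubeconfig path is the one from the log and "standard" is the class minikube normally installs:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// storageclass.yaml was never applied, so the "standard" default class
	// minikube normally installs should be absent from this list.
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sc := range scs.Items {
		def := sc.Annotations["storageclass.kubernetes.io/is-default-class"]
		fmt.Printf("%s (default=%q)\n", sc.Name, def)
	}
	fmt.Printf("%d StorageClass(es) found\n", len(scs.Items))
}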
	I1222 22:53:03.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:53:03.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:03.250104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:03.749742  146734 type.go:165] "Request Body" body=""
	I1222 22:53:03.749825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:03.750182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:04.249738  146734 type.go:165] "Request Body" body=""
	I1222 22:53:04.249809  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:04.250142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:04.749826  146734 type.go:165] "Request Body" body=""
	I1222 22:53:04.749903  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:04.750163  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:05.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:53:05.249795  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:05.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:05.250198  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:05.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:05.749806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:05.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:06.249727  146734 type.go:165] "Request Body" body=""
	I1222 22:53:06.249793  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:06.250084  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:06.749693  146734 type.go:165] "Request Body" body=""
	I1222 22:53:06.749808  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:06.750149  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:07.249718  146734 type.go:165] "Request Body" body=""
	I1222 22:53:07.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:07.250157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:07.250228  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:07.749972  146734 type.go:165] "Request Body" body=""
	I1222 22:53:07.750046  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:07.750368  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:08.249951  146734 type.go:165] "Request Body" body=""
	I1222 22:53:08.250025  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:08.250370  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:08.750009  146734 type.go:165] "Request Body" body=""
	I1222 22:53:08.750097  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:08.750447  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:09.250029  146734 type.go:165] "Request Body" body=""
	I1222 22:53:09.250111  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:09.250445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:09.250517  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:09.750056  146734 type.go:165] "Request Body" body=""
	I1222 22:53:09.750146  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:09.750490  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~120 further identical polling cycles omitted: GET https://192.168.49.2:8441/api/v1/nodes/functional-384766 retried every ~500 ms from 22:53:10 through 22:54:10, each response empty (dial tcp 192.168.49.2:8441: connect: connection refused), with node_ready.go:55 repeating the same "will retry" warning roughly every 2 s ...]
	I1222 22:54:11.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.249820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.250144  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:11.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.749862  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.750201  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:12.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.249873  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.250162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:12.250235  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:12.750120  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.750539  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.250225  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.250297  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.250634  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.750359  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.750457  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.750827  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:14.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.250556  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.250898  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:14.250968  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:14.750562  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.750659  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.750987  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.250672  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.250773  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.251113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.749701  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.749784  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.750120  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.249715  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.249790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.250112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.749747  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.749839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.750218  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:16.750293  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:17.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.249860  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:17.750000  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.750088  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.750446  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.250179  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.749732  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.749816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.750154  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:19.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.250196  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:19.250264  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:19.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.749819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.750126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.249785  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.250091  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.750150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.249797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.250083  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.749694  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.749796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.750142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:21.750217  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:22.249754  146734 type.go:165] "Request Body" body=""
	I1222 22:54:22.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:22.250160  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:22.750118  146734 type.go:165] "Request Body" body=""
	I1222 22:54:22.750196  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:22.750523  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:23.250322  146734 type.go:165] "Request Body" body=""
	I1222 22:54:23.250400  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:23.250767  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:23.750404  146734 type.go:165] "Request Body" body=""
	I1222 22:54:23.750488  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:23.750857  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:23.750933  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:24.249571  146734 type.go:165] "Request Body" body=""
	I1222 22:54:24.249681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:24.250051  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:24.749643  146734 type.go:165] "Request Body" body=""
	I1222 22:54:24.749721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:24.750066  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:25.249682  146734 type.go:165] "Request Body" body=""
	I1222 22:54:25.249768  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:25.250100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:25.749662  146734 type.go:165] "Request Body" body=""
	I1222 22:54:25.749739  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:25.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:26.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:54:26.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:26.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:26.250210  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:26.749706  146734 type.go:165] "Request Body" body=""
	I1222 22:54:26.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:26.750145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:27.249715  146734 type.go:165] "Request Body" body=""
	I1222 22:54:27.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:27.250030  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:27.749914  146734 type.go:165] "Request Body" body=""
	I1222 22:54:27.749988  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:27.750304  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:28.249902  146734 type.go:165] "Request Body" body=""
	I1222 22:54:28.249990  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:28.250337  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:28.250411  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:28.749874  146734 type.go:165] "Request Body" body=""
	I1222 22:54:28.749948  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:28.750244  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:29.249918  146734 type.go:165] "Request Body" body=""
	I1222 22:54:29.249996  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:29.250346  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:29.749881  146734 type.go:165] "Request Body" body=""
	I1222 22:54:29.749960  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:29.750287  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:30.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:30.249800  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:30.250101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:30.749693  146734 type.go:165] "Request Body" body=""
	I1222 22:54:30.749779  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:30.750112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:30.750186  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:31.249759  146734 type.go:165] "Request Body" body=""
	I1222 22:54:31.249844  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:31.250182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:31.749729  146734 type.go:165] "Request Body" body=""
	I1222 22:54:31.749814  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:31.750130  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:32.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:54:32.249862  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:32.250186  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:32.750121  146734 type.go:165] "Request Body" body=""
	I1222 22:54:32.750207  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:32.750542  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:32.750634  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:33.249784  146734 type.go:165] "Request Body" body=""
	I1222 22:54:33.249855  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:33.250171  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:33.749810  146734 type.go:165] "Request Body" body=""
	I1222 22:54:33.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:33.750215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:34.249774  146734 type.go:165] "Request Body" body=""
	I1222 22:54:34.249851  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:34.250176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:34.749708  146734 type.go:165] "Request Body" body=""
	I1222 22:54:34.749777  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:34.750090  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:35.249688  146734 type.go:165] "Request Body" body=""
	I1222 22:54:35.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:35.250089  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:35.250148  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:35.749687  146734 type.go:165] "Request Body" body=""
	I1222 22:54:35.749759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:35.750079  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:36.249639  146734 type.go:165] "Request Body" body=""
	I1222 22:54:36.249710  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:36.250032  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:36.749672  146734 type.go:165] "Request Body" body=""
	I1222 22:54:36.749746  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:36.750070  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:37.249695  146734 type.go:165] "Request Body" body=""
	I1222 22:54:37.249776  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:37.250115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:37.250177  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:37.750002  146734 type.go:165] "Request Body" body=""
	I1222 22:54:37.750095  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:37.750456  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:38.250050  146734 type.go:165] "Request Body" body=""
	I1222 22:54:38.250125  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:38.250452  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:38.749992  146734 type.go:165] "Request Body" body=""
	I1222 22:54:38.750073  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:38.750425  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:39.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:39.249774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:39.250018  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:39.749673  146734 type.go:165] "Request Body" body=""
	I1222 22:54:39.749773  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:39.750117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:39.750178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:40.249716  146734 type.go:165] "Request Body" body=""
	I1222 22:54:40.249789  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:40.250105  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:40.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:54:40.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:40.750133  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:41.249799  146734 type.go:165] "Request Body" body=""
	I1222 22:54:41.249873  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:41.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:41.749789  146734 type.go:165] "Request Body" body=""
	I1222 22:54:41.749883  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:41.750225  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:41.750287  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:42.249727  146734 type.go:165] "Request Body" body=""
	I1222 22:54:42.249800  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:42.250096  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:42.750174  146734 type.go:165] "Request Body" body=""
	I1222 22:54:42.750257  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:42.750651  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:43.250236  146734 type.go:165] "Request Body" body=""
	I1222 22:54:43.250313  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:43.250694  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:43.750289  146734 type.go:165] "Request Body" body=""
	I1222 22:54:43.750355  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:43.750686  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:43.750759  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:44.250285  146734 type.go:165] "Request Body" body=""
	I1222 22:54:44.250357  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:44.250709  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:44.750302  146734 type.go:165] "Request Body" body=""
	I1222 22:54:44.750384  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:44.750746  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:45.250350  146734 type.go:165] "Request Body" body=""
	I1222 22:54:45.250430  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:45.250764  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:45.750440  146734 type.go:165] "Request Body" body=""
	I1222 22:54:45.750515  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:45.750874  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:45.750949  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:46.250491  146734 type.go:165] "Request Body" body=""
	I1222 22:54:46.250567  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:46.250913  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:46.750576  146734 type.go:165] "Request Body" body=""
	I1222 22:54:46.750673  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:46.751016  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:47.249570  146734 type.go:165] "Request Body" body=""
	I1222 22:54:47.249660  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:47.249996  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:47.749731  146734 type.go:165] "Request Body" body=""
	I1222 22:54:47.749824  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:47.750169  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:48.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:54:48.249822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:48.250159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:48.250226  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:48.749795  146734 type.go:165] "Request Body" body=""
	I1222 22:54:48.749894  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:48.750223  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:49.249848  146734 type.go:165] "Request Body" body=""
	I1222 22:54:49.249934  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:49.250267  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:49.749720  146734 type.go:165] "Request Body" body=""
	I1222 22:54:49.749804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:49.750126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:50.249670  146734 type.go:165] "Request Body" body=""
	I1222 22:54:50.249750  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:50.250079  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:50.749692  146734 type.go:165] "Request Body" body=""
	I1222 22:54:50.749766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:50.750099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:50.750164  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:51.249654  146734 type.go:165] "Request Body" body=""
	I1222 22:54:51.249734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:51.250056  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:51.749698  146734 type.go:165] "Request Body" body=""
	I1222 22:54:51.749776  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:51.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:52.249708  146734 type.go:165] "Request Body" body=""
	I1222 22:54:52.249803  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:52.250143  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:52.750168  146734 type.go:165] "Request Body" body=""
	I1222 22:54:52.750240  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:52.750636  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:52.750725  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:53.250283  146734 type.go:165] "Request Body" body=""
	I1222 22:54:53.250369  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:53.250696  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:53.750388  146734 type.go:165] "Request Body" body=""
	I1222 22:54:53.750469  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:53.750824  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:54.250460  146734 type.go:165] "Request Body" body=""
	I1222 22:54:54.250538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:54.250871  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:54.750538  146734 type.go:165] "Request Body" body=""
	I1222 22:54:54.750645  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:54.750985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:54.751057  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:55.249649  146734 type.go:165] "Request Body" body=""
	I1222 22:54:55.249732  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:55.250057  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:55.749707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:55.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:55.750108  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:56.249688  146734 type.go:165] "Request Body" body=""
	I1222 22:54:56.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:56.250095  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:56.749726  146734 type.go:165] "Request Body" body=""
	I1222 22:54:56.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:56.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:57.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:54:57.249701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:57.250071  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:57.250148  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:57.750019  146734 type.go:165] "Request Body" body=""
	I1222 22:54:57.750094  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:57.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:58.249983  146734 type.go:165] "Request Body" body=""
	I1222 22:54:58.250056  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:58.250388  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:58.749735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:58.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:58.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:59.249693  146734 type.go:165] "Request Body" body=""
	I1222 22:54:59.249770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:59.250112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:59.250182  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:59.749735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:59.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:59.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:00.249747  146734 type.go:165] "Request Body" body=""
	I1222 22:55:00.249832  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:00.250162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:00.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:55:00.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:00.750139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:01.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:55:01.249796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:01.250170  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:01.250234  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:01.749747  146734 type.go:165] "Request Body" body=""
	I1222 22:55:01.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:01.750177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:02.249773  146734 type.go:165] "Request Body" body=""
	I1222 22:55:02.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:02.250190  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:02.750185  146734 type.go:165] "Request Body" body=""
	I1222 22:55:02.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:02.750679  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:03.250410  146734 type.go:165] "Request Body" body=""
	I1222 22:55:03.250484  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:03.250800  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:03.250864  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:03.750490  146734 type.go:165] "Request Body" body=""
	I1222 22:55:03.750574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:03.750953  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:04.250588  146734 type.go:165] "Request Body" body=""
	I1222 22:55:04.250700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:04.251072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:04.749564  146734 type.go:165] "Request Body" body=""
	I1222 22:55:04.749679  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:04.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:05.249662  146734 type.go:165] "Request Body" body=""
	I1222 22:55:05.249748  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:05.250095  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:05.749749  146734 type.go:165] "Request Body" body=""
	I1222 22:55:05.749828  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:05.750162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:05.750227  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:06.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:06.249815  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:06.250126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:06.749723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:06.749809  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:06.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:07.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:55:07.249819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:07.250155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:07.749808  146734 type.go:165] "Request Body" body=""
	I1222 22:55:07.749885  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:07.750206  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:07.750279  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:08.249799  146734 type.go:165] "Request Body" body=""
	I1222 22:55:08.249870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:08.250200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:08.749784  146734 type.go:165] "Request Body" body=""
	I1222 22:55:08.749858  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:08.750157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:09.249830  146734 type.go:165] "Request Body" body=""
	I1222 22:55:09.249933  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:09.250259  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:09.749895  146734 type.go:165] "Request Body" body=""
	I1222 22:55:09.749973  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:09.750307  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:09.750375  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:10.249899  146734 type.go:165] "Request Body" body=""
	I1222 22:55:10.249974  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:10.250316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:10.749711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:10.749786  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:10.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:11.249748  146734 type.go:165] "Request Body" body=""
	I1222 22:55:11.249819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:11.250148  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:11.749856  146734 type.go:165] "Request Body" body=""
	I1222 22:55:11.749994  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:11.750335  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:12.249957  146734 type.go:165] "Request Body" body=""
	I1222 22:55:12.250025  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:12.250328  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:12.250386  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:12.750340  146734 type.go:165] "Request Body" body=""
	I1222 22:55:12.750433  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:12.750899  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:13.250509  146734 type.go:165] "Request Body" body=""
	I1222 22:55:13.250620  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:13.250953  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:13.750574  146734 type.go:165] "Request Body" body=""
	I1222 22:55:13.750664  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:13.750913  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:14.249616  146734 type.go:165] "Request Body" body=""
	I1222 22:55:14.249702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:14.250052  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:14.749677  146734 type.go:165] "Request Body" body=""
	I1222 22:55:14.749762  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:14.750119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:14.750178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:15.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:55:15.249782  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:15.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:15.749689  146734 type.go:165] "Request Body" body=""
	I1222 22:55:15.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:15.750110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:16.249711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:16.249796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:16.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:16.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:55:16.749779  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:16.750098  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:17.249718  146734 type.go:165] "Request Body" body=""
	I1222 22:55:17.249799  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:17.250129  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:17.250204  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:17.749909  146734 type.go:165] "Request Body" body=""
	I1222 22:55:17.750010  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:17.750386  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:18.250005  146734 type.go:165] "Request Body" body=""
	I1222 22:55:18.250097  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:18.250519  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:18.750237  146734 type.go:165] "Request Body" body=""
	I1222 22:55:18.750318  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:18.750671  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:19.250360  146734 type.go:165] "Request Body" body=""
	I1222 22:55:19.250435  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:19.250782  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:19.250850  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:19.750393  146734 type.go:165] "Request Body" body=""
	I1222 22:55:19.750473  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:19.750812  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:20.250244  146734 type.go:165] "Request Body" body=""
	I1222 22:55:20.250340  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:20.250766  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:20.749617  146734 type.go:165] "Request Body" body=""
	I1222 22:55:20.749731  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:20.750232  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:21.249738  146734 type.go:165] "Request Body" body=""
	I1222 22:55:21.249818  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:21.250145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:21.749880  146734 type.go:165] "Request Body" body=""
	I1222 22:55:21.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:21.750345  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:21.750422  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:22.249651  146734 type.go:165] "Request Body" body=""
	I1222 22:55:22.249745  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:22.250098  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:22.750064  146734 type.go:165] "Request Body" body=""
	I1222 22:55:22.750133  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:22.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:23.250053  146734 type.go:165] "Request Body" body=""
	I1222 22:55:23.250126  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:23.250447  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:23.750003  146734 type.go:165] "Request Body" body=""
	I1222 22:55:23.750079  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:23.750487  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:23.750580  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:24.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.250165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:24.749758  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.749828  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.750192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.249696  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.249769  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.250072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.749697  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:26.249886  146734 type.go:165] "Request Body" body=""
	I1222 22:55:26.249958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:26.250275  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:26.250336  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:26.750067  146734 type.go:165] "Request Body" body=""
	I1222 22:55:26.750154  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:26.750517  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:27.250329  146734 type.go:165] "Request Body" body=""
	I1222 22:55:27.250410  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:27.250697  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:27.750580  146734 type.go:165] "Request Body" body=""
	I1222 22:55:27.750669  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:27.751022  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:28.249726  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.249808  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:28.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.749811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:28.750237  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:29.249909  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.249982  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.250305  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:29.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.750450  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.250215  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.250646  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.750488  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.750567  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.750921  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:30.750983  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
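
What node_ready.go is ultimately waiting for is the node's Ready condition, and the same loop can be expressed with the stock client-go and apimachinery helpers. A hedged sketch, assuming k8s.io/client-go and wait.PollUntilContextTimeout (the kubeconfig path is a placeholder; the node name, 500 ms interval, and overall budget mirror the cadence in this log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, swallowing transient errors so
	// the loop keeps retrying, exactly as the warnings above describe.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-384766", metav1.GetOptions{})
			if err != nil {
				fmt.Printf("will retry: %v\n", err)
				return false, nil // retryable: apiserver not reachable yet
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
	fmt.Println("node ready:", err == nil)
}
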
	I1222 22:55:31.249701  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.249780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:31.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.749792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.750145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.249871  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.249942  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.250315  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.750384  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.750771  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:33.250608  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.250702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.251142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:33.251214  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:33.749937  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.750014  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.750358  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.250212  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.250294  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.250661  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.750449  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.750521  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.750895  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.249645  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.249716  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.749839  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.749916  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.750258  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:35.750321  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:36.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.249815  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.250128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:36.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.749780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.750111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.249872  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.249947  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.250270  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.750107  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.750195  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.750537  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:37.750607  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:38.250386  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.250473  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.250844  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:38.749618  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.749699  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.750037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.249712  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.249799  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.250114  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.749869  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.749958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.750319  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:40.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.249789  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.250125  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:40.250192  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:40.749723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.249867  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.249964  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.250300  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.750030  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.750104  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.750449  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:42.250237  146734 type.go:165] "Request Body" body=""
	I1222 22:55:42.250316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:42.250702  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:42.250790  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:42.749698  146734 type.go:165] "Request Body" body=""
	I1222 22:55:42.749778  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:42.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:43.249908  146734 type.go:165] "Request Body" body=""
	I1222 22:55:43.249984  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:43.250316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:43.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:55:43.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:43.750109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:44.249847  146734 type.go:165] "Request Body" body=""
	I1222 22:55:44.249920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:44.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:44.749977  146734 type.go:165] "Request Body" body=""
	I1222 22:55:44.750059  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:44.750429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:44.750493  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:45.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:55:45.250340  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:45.250671  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:45.750529  146734 type.go:165] "Request Body" body=""
	I1222 22:55:45.750635  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:45.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:46.249762  146734 type.go:165] "Request Body" body=""
	I1222 22:55:46.249839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:46.250179  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:46.749734  146734 type.go:165] "Request Body" body=""
	I1222 22:55:46.749810  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:46.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:47.249888  146734 type.go:165] "Request Body" body=""
	I1222 22:55:47.249964  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:47.250293  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:47.250367  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:47.750084  146734 type.go:165] "Request Body" body=""
	I1222 22:55:47.750158  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:47.750514  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:48.250329  146734 type.go:165] "Request Body" body=""
	I1222 22:55:48.250397  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:48.250735  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:48.750528  146734 type.go:165] "Request Body" body=""
	I1222 22:55:48.750629  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:48.750960  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:49.249711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:49.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:49.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:49.749756  146734 type.go:165] "Request Body" body=""
	I1222 22:55:49.749840  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:49.750202  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:49.750280  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:50.250567  146734 type.go:165] "Request Body" body=""
	I1222 22:55:50.250668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:50.251016  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:50.749759  146734 type.go:165] "Request Body" body=""
	I1222 22:55:50.749830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:50.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:51.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:55:51.249810  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:51.250134  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:51.749846  146734 type.go:165] "Request Body" body=""
	I1222 22:55:51.749921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:51.750215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:52.249929  146734 type.go:165] "Request Body" body=""
	I1222 22:55:52.250026  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:52.250348  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:52.250418  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:52.750253  146734 type.go:165] "Request Body" body=""
	I1222 22:55:52.750351  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:52.750714  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:53.250551  146734 type.go:165] "Request Body" body=""
	I1222 22:55:53.250675  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:53.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:53.749793  146734 type.go:165] "Request Body" body=""
	I1222 22:55:53.749867  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:53.750205  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:54.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:55:54.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:54.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:54.749935  146734 type.go:165] "Request Body" body=""
	I1222 22:55:54.750018  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:54.750343  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:54.750406  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:55.250174  146734 type.go:165] "Request Body" body=""
	I1222 22:55:55.250255  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:55.250582  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:55.750396  146734 type.go:165] "Request Body" body=""
	I1222 22:55:55.750466  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:55.750800  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:56.250589  146734 type.go:165] "Request Body" body=""
	I1222 22:55:56.250683  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:56.251003  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:56.749740  146734 type.go:165] "Request Body" body=""
	I1222 22:55:56.749822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:56.750149  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:57.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:55:57.249767  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:57.250019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:57.250065  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-384766 poll above repeats unchanged every ~500 ms from 22:55:57 through 22:56:58, where this excerpt cuts off mid-entry; every response is the same empty status="" with zero headers, and node_ready.go:55 logs the connection-refused "will retry" warning roughly every two seconds throughout. The apiserver at 192.168.49.2:8441 never starts accepting connections in this window. ...]
	 >
	I1222 22:56:58.750192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:59.249907  146734 type.go:165] "Request Body" body=""
	I1222 22:56:59.249979  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:59.250306  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:59.750056  146734 type.go:165] "Request Body" body=""
	I1222 22:56:59.750136  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:59.750491  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:59.750565  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:00.250323  146734 type.go:165] "Request Body" body=""
	I1222 22:57:00.250408  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:00.250755  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:00.749631  146734 type.go:165] "Request Body" body=""
	I1222 22:57:00.749707  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:00.750021  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:01.249677  146734 type.go:165] "Request Body" body=""
	I1222 22:57:01.249762  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:01.250096  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:01.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:57:01.749919  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:01.750234  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:02.249941  146734 type.go:165] "Request Body" body=""
	I1222 22:57:02.250014  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:02.250348  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:02.250421  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:02.750212  146734 type.go:165] "Request Body" body=""
	I1222 22:57:02.750300  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:02.750654  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:03.250450  146734 type.go:165] "Request Body" body=""
	I1222 22:57:03.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:03.250851  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:03.749576  146734 type.go:165] "Request Body" body=""
	I1222 22:57:03.749665  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:03.749988  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:04.249767  146734 type.go:165] "Request Body" body=""
	I1222 22:57:04.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:04.250173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:04.750033  146734 type.go:165] "Request Body" body=""
	I1222 22:57:04.750117  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:04.750502  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:04.750569  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:05.250312  146734 type.go:165] "Request Body" body=""
	I1222 22:57:05.250398  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:05.250762  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:05.750575  146734 type.go:165] "Request Body" body=""
	I1222 22:57:05.750666  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:05.751012  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:06.249706  146734 type.go:165] "Request Body" body=""
	I1222 22:57:06.249781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:06.250093  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:06.749823  146734 type.go:165] "Request Body" body=""
	I1222 22:57:06.749898  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:06.750282  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:07.250051  146734 type.go:165] "Request Body" body=""
	I1222 22:57:07.250124  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:07.250473  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:07.250533  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:07.750214  146734 type.go:165] "Request Body" body=""
	I1222 22:57:07.750298  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:07.750580  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:08.250395  146734 type.go:165] "Request Body" body=""
	I1222 22:57:08.250486  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:08.250871  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:08.749688  146734 type.go:165] "Request Body" body=""
	I1222 22:57:08.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:08.750089  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:09.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:57:09.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:09.250145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:09.749900  146734 type.go:165] "Request Body" body=""
	I1222 22:57:09.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:09.750354  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:09.750431  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:10.250176  146734 type.go:165] "Request Body" body=""
	I1222 22:57:10.250252  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:10.250587  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:10.750397  146734 type.go:165] "Request Body" body=""
	I1222 22:57:10.750472  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:10.750832  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:11.249645  146734 type.go:165] "Request Body" body=""
	I1222 22:57:11.249739  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:11.250107  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:11.749864  146734 type.go:165] "Request Body" body=""
	I1222 22:57:11.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:11.750316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:12.249663  146734 type.go:165] "Request Body" body=""
	I1222 22:57:12.249748  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:12.250105  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:12.250178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:12.750096  146734 type.go:165] "Request Body" body=""
	I1222 22:57:12.750174  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:12.750521  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:13.250403  146734 type.go:165] "Request Body" body=""
	I1222 22:57:13.250481  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:13.250854  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:13.749624  146734 type.go:165] "Request Body" body=""
	I1222 22:57:13.749717  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:13.750062  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:14.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:57:14.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:14.250173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:14.250237  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:14.749931  146734 type.go:165] "Request Body" body=""
	I1222 22:57:14.750016  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:14.750331  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.250077  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.250149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.250459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.750281  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.750366  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.750687  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:16.250490  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.250582  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.250949  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:16.251012  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:16.749742  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.749829  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.750173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.249915  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.250011  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.750089  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.750167  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.750505  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.250322  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.250402  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.250802  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.749588  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.749719  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.750078  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:18.750142  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:19.249857  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.249933  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.250256  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:19.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.749854  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.750208  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.249748  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.249831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.250157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.750100  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.750194  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.750870  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:20.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:21.249638  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.249718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:21.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.749701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.750025  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.249683  146734 type.go:165] "Request Body" body=""
	I1222 22:57:22.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:22.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.750117  146734 node_ready.go:38] duration metric: took 6m0.000675026s for node "functional-384766" to be "Ready" ...
	I1222 22:57:22.752685  146734 out.go:203] 
	W1222 22:57:22.753745  146734 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 22:57:22.753760  146734 out.go:285] * 
	W1222 22:57:22.753991  146734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 22:57:22.755053  146734 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.767074805Z" level=info msg="Loading containers: done."
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776636662Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776667996Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776702670Z" level=info msg="Initializing buildkit"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.795232210Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799403213Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799466264Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799532589Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799493974Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:51:20 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:20 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:51:21 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:51:21 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:57:25.961846   16628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:25.962417   16628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:25.964025   16628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:25.964480   16628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:25.966049   16628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 22:57:25 up  2:39,  0 user,  load average: 0.72, 0.31, 0.57
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 22:57:22 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:23 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 22 22:57:23 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:23 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:23 functional-384766 kubelet[16319]: E1222 22:57:23.552489   16319 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:23 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:23 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:24 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 22 22:57:24 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:24 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:24 functional-384766 kubelet[16463]: E1222 22:57:24.290335   16463 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:24 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:24 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:24 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 22 22:57:24 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:24 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:25 functional-384766 kubelet[16489]: E1222 22:57:25.044488   16489 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:25 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:25 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:25 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 22 22:57:25 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:25 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:25 functional-384766 kubelet[16529]: E1222 22:57:25.785456   16529 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:25 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:25 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
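The kubelet section above shows why every request was refused: the v1.35.0-rc.1 kubelet fails config validation because it refuses to run on a cgroup v1 host, exits, and is restarted by systemd in a tight loop (restart counter 810+), so the apiserver on 8441 never comes up. A minimal check of which cgroup version the node is actually running, assuming shell access to this profile via `minikube ssh` (the commands are standard coreutils/Docker, not taken from this report):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy.
	minikube -p functional-384766 ssh -- stat -fc %T /sys/fs/cgroup/
	# Docker's own view of the host it runs on:
	docker info --format '{{.CgroupVersion}}'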
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (296.770263ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (1.78s)
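The six-minute wait loop at the top of these logs is minikube polling GET /api/v1/nodes/functional-384766 roughly every 500ms for the node's "Ready" condition. A rough manual equivalent of that check, assuming the kubeconfig context created for this profile and a reachable apiserver (here it never became reachable):

	kubectl --context functional-384766 get node functional-384766 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'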

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 kubectl -- --context functional-384766 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 kubectl -- --context functional-384766 get pods: exit status 1 (110.863592ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-384766 kubectl -- --context functional-384766 get pods": exit status 1
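Before the post-mortem, the refused endpoint can be probed directly; /livez is a standard kube-apiserver health endpoint and answers once the control plane is up. A sketch, assuming the addresses from this run (the 32786 host port comes from the docker inspect output below):

	curl -k https://192.168.49.2:8441/livez
	# or through the port Docker publishes on the loopback interface:
	curl -k https://127.0.0.1:32786/livez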
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
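The port mappings buried in the inspect output above can be pulled out directly with Docker's Go-template format flag; a one-liner sketch for the published apiserver port:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384766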
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (288.99364ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-580825 ssh pgrep buildkitd                                                                                                             │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image   │ functional-580825 image ls --format json --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format short --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image   │ functional-580825 image ls --format yaml --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format table --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr                                            │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls                                                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ delete  │ -p functional-580825                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ start   │ -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start   │ -p functional-384766 --alsologtostderr -v=8                                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:51 UTC │                     │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:latest                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add minikube-local-cache-test:functional-384766                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache delete minikube-local-cache-test:functional-384766                                                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl images                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	│ cache   │ functional-384766 cache reload                                                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ kubectl │ functional-384766 kubectl -- --context functional-384766 get pods                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:51:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:51:17.565426  146734 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:51:17.565716  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565727  146734 out.go:374] Setting ErrFile to fd 2...
	I1222 22:51:17.565732  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565972  146734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:51:17.566463  146734 out.go:368] Setting JSON to false
	I1222 22:51:17.567434  146734 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9218,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:51:17.567486  146734 start.go:143] virtualization: kvm guest
	I1222 22:51:17.569465  146734 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:51:17.570460  146734 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:51:17.570465  146734 notify.go:221] Checking for updates...
	I1222 22:51:17.572456  146734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:51:17.573608  146734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:17.574791  146734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:51:17.575840  146734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:51:17.576824  146734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:51:17.578279  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:17.578404  146734 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:51:17.602058  146734 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:51:17.602223  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.652786  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.644025132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.652901  146734 docker.go:319] overlay module found
	I1222 22:51:17.655127  146734 out.go:179] * Using the docker driver based on existing profile
	I1222 22:51:17.656150  146734 start.go:309] selected driver: docker
	I1222 22:51:17.656165  146734 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.656249  146734 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:51:17.656337  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.716062  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.707507925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.716919  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:17.717012  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:17.717085  146734 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.719515  146734 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:51:17.720631  146734 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:51:17.721792  146734 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:51:17.723064  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:17.723095  146734 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:51:17.723112  146734 cache.go:65] Caching tarball of preloaded images
	I1222 22:51:17.723172  146734 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:51:17.723191  146734 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:51:17.723198  146734 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:51:17.723299  146734 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:51:17.742349  146734 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:51:17.742368  146734 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:51:17.742396  146734 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:51:17.742444  146734 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:51:17.742506  146734 start.go:364] duration metric: took 41.881µs to acquireMachinesLock for "functional-384766"
	I1222 22:51:17.742535  146734 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:51:17.742545  146734 fix.go:54] fixHost starting: 
	I1222 22:51:17.742810  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:17.759507  146734 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:51:17.759531  146734 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:51:17.761090  146734 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:51:17.761123  146734 machine.go:94] provisionDockerMachine start ...
	I1222 22:51:17.761180  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.778682  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.778900  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.778912  146734 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:51:17.919326  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:17.919369  146734 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:51:17.919431  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.936992  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.937221  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.937234  146734 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:51:18.086470  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:18.086564  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.104748  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.105051  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.105077  146734 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:51:18.246730  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
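
The block above is minikube's SSH plumbing end to end: cli_runner asks Docker which host port is published for the container's 22/tcp, then libmachine dials 127.0.0.1 on that port with the machine key and runs each provisioning command (here `hostname`, then the hostname/hosts updates). A minimal Go sketch of that flow, not minikube's actual code -- the container name, key path, user, and the `hostname` command are taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Same Go template the cli_runner lines use to find the host port for 22/tcp.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"functional-384766").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out)) // "32783" in this run

	// Authenticate with the machine key the sshutil lines reference.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local container
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	hostname, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(hostname)) // expected: functional-384766, matching the SSH output above
}
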
	I1222 22:51:18.246760  146734 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:51:18.246782  146734 ubuntu.go:190] setting up certificates
	I1222 22:51:18.246792  146734 provision.go:84] configureAuth start
	I1222 22:51:18.246854  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:18.265782  146734 provision.go:143] copyHostCerts
	I1222 22:51:18.265828  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.265879  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:51:18.265900  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.266005  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:51:18.266139  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266163  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:51:18.266175  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266220  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:51:18.266317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266344  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:51:18.266355  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266400  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:51:18.266499  146734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
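
provision.go:117 above generates a server certificate signed by minikubeCA carrying every name the machine might be reached by. A self-contained sketch of that shape with Go's crypto/x509 -- it creates a throwaway in-memory CA rather than loading ca.pem/ca-key.pem as minikube does; the SANs, organization, and 26280h lifetime mirror the log and cluster config, the rest is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem from the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-384766"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-384766", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // the server.pem equivalent
}
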
	I1222 22:51:18.330118  146734 provision.go:177] copyRemoteCerts
	I1222 22:51:18.330177  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:51:18.330210  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.347420  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:18.447556  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 22:51:18.447646  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:51:18.464129  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 22:51:18.464180  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:51:18.480702  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 22:51:18.480757  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 22:51:18.496998  146734 provision.go:87] duration metric: took 250.195084ms to configureAuth
	I1222 22:51:18.497021  146734 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:51:18.497168  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:18.497218  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.514380  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.514623  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.514636  146734 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:51:18.655354  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:51:18.655383  146734 ubuntu.go:71] root file system type: overlay
	I1222 22:51:18.655533  146734 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:51:18.655634  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.673540  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.673819  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.673915  146734 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:51:18.823487  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:51:18.823601  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.841347  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.841608  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.841639  146734 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:51:18.987007  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
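
The one-liner above is an idempotent apply: `diff -u` exits 0 when the rendered unit matches the installed one, so the mv/daemon-reload/restart branch after `||` runs only when the file actually changed -- which is why this restart of an already-provisioned machine produced no output. The same pattern sketched in Go (paths and systemctl invocations taken from the log; error handling is illustrative):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	// Read both units; a missing installed unit simply counts as "changed".
	installed, _ := os.ReadFile("/lib/systemd/system/docker.service")
	proposed, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(installed, proposed) {
		return // diff -u exits 0: leave the running daemon alone
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	// Same chain the shell one-liner runs after the mv.
	for _, args := range [][]string{
		{"-f", "daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
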
	I1222 22:51:18.987042  146734 machine.go:97] duration metric: took 1.225905804s to provisionDockerMachine
	I1222 22:51:18.987059  146734 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:51:18.987075  146734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:51:18.987145  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:51:18.987199  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.006696  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.107530  146734 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:51:19.110931  146734 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 22:51:19.110952  146734 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 22:51:19.110959  146734 command_runner.go:130] > VERSION_ID="12"
	I1222 22:51:19.110964  146734 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 22:51:19.110979  146734 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 22:51:19.110985  146734 command_runner.go:130] > ID=debian
	I1222 22:51:19.110992  146734 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 22:51:19.111000  146734 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 22:51:19.111012  146734 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 22:51:19.111100  146734 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:51:19.111124  146734 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:51:19.111137  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:51:19.111205  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:51:19.111317  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:51:19.111330  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /etc/ssl/certs/758032.pem
	I1222 22:51:19.111426  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:51:19.111434  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> /etc/test/nested/copy/75803/hosts
	I1222 22:51:19.111495  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:51:19.119122  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:19.135900  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:51:19.152438  146734 start.go:296] duration metric: took 165.360222ms for postStartSetup
	I1222 22:51:19.152512  146734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:51:19.152568  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.170181  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.267525  146734 command_runner.go:130] > 37%
	I1222 22:51:19.267628  146734 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:51:19.272133  146734 command_runner.go:130] > 185G
	I1222 22:51:19.272164  146734 fix.go:56] duration metric: took 1.529618595s for fixHost
	I1222 22:51:19.272178  146734 start.go:83] releasing machines lock for "functional-384766", held for 1.529658247s
	I1222 22:51:19.272243  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:19.290506  146734 ssh_runner.go:195] Run: cat /version.json
	I1222 22:51:19.290562  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.290583  146734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:51:19.290685  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.307884  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.308688  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.461522  146734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 22:51:19.463216  146734 command_runner.go:130] > {"iso_version": "v1.37.0-1766254259-22261", "kicbase_version": "v0.0.48-1766394456-22288", "minikube_version": "v1.37.0", "commit": "069cfc84263169a672fdad8d37486b5cb35673ac"}
	I1222 22:51:19.463366  146734 ssh_runner.go:195] Run: systemctl --version
	I1222 22:51:19.469697  146734 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 22:51:19.469761  146734 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 22:51:19.469847  146734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 22:51:19.474292  146734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 22:51:19.474367  146734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:51:19.474416  146734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:51:19.482031  146734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:51:19.482056  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.482091  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.482215  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.495227  146734 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 22:51:19.495298  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:51:19.503438  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:51:19.511525  146734 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:51:19.511574  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:51:19.519676  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.527517  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:51:19.535615  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.543569  146734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:51:19.550965  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:51:19.559037  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:51:19.567079  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:51:19.575222  146734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:51:19.582102  146734 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 22:51:19.582154  146734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:51:19.588882  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:19.668907  146734 ssh_runner.go:195] Run: sudo systemctl restart containerd
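
The sed chain above rewrites /etc/containerd/config.toml so containerd agrees with the "cgroupfs" driver detected on the host. A sketch of the central edit -- forcing SystemdCgroup = false -- done in Go rather than sed (the regex mirrors the sed expression from the log; the file mode is an assumption):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
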
	I1222 22:51:19.740882  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.740926  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.740967  146734 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:51:19.753727  146734 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1222 22:51:19.753762  146734 command_runner.go:130] > [Unit]
	I1222 22:51:19.753770  146734 command_runner.go:130] > Description=Docker Application Container Engine
	I1222 22:51:19.753778  146734 command_runner.go:130] > Documentation=https://docs.docker.com
	I1222 22:51:19.753787  146734 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1222 22:51:19.753797  146734 command_runner.go:130] > Wants=network-online.target containerd.service
	I1222 22:51:19.753808  146734 command_runner.go:130] > Requires=docker.socket
	I1222 22:51:19.753815  146734 command_runner.go:130] > StartLimitBurst=3
	I1222 22:51:19.753825  146734 command_runner.go:130] > StartLimitIntervalSec=60
	I1222 22:51:19.753833  146734 command_runner.go:130] > [Service]
	I1222 22:51:19.753841  146734 command_runner.go:130] > Type=notify
	I1222 22:51:19.753848  146734 command_runner.go:130] > Restart=always
	I1222 22:51:19.753862  146734 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1222 22:51:19.753882  146734 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1222 22:51:19.753896  146734 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1222 22:51:19.753910  146734 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1222 22:51:19.753923  146734 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1222 22:51:19.753937  146734 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1222 22:51:19.753952  146734 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1222 22:51:19.753969  146734 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1222 22:51:19.753983  146734 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1222 22:51:19.753991  146734 command_runner.go:130] > ExecStart=
	I1222 22:51:19.754018  146734 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1222 22:51:19.754031  146734 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1222 22:51:19.754046  146734 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1222 22:51:19.754060  146734 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1222 22:51:19.754067  146734 command_runner.go:130] > LimitNOFILE=infinity
	I1222 22:51:19.754076  146734 command_runner.go:130] > LimitNPROC=infinity
	I1222 22:51:19.754084  146734 command_runner.go:130] > LimitCORE=infinity
	I1222 22:51:19.754095  146734 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1222 22:51:19.754107  146734 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1222 22:51:19.754115  146734 command_runner.go:130] > TasksMax=infinity
	I1222 22:51:19.754124  146734 command_runner.go:130] > TimeoutStartSec=0
	I1222 22:51:19.754137  146734 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1222 22:51:19.754152  146734 command_runner.go:130] > Delegate=yes
	I1222 22:51:19.754162  146734 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1222 22:51:19.754171  146734 command_runner.go:130] > KillMode=process
	I1222 22:51:19.754179  146734 command_runner.go:130] > OOMScoreAdjust=-500
	I1222 22:51:19.754187  146734 command_runner.go:130] > [Install]
	I1222 22:51:19.754196  146734 command_runner.go:130] > WantedBy=multi-user.target
	I1222 22:51:19.754834  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.766639  146734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:51:19.781290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.792290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:51:19.803490  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.815697  146734 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1222 22:51:19.816642  146734 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:51:19.820210  146734 command_runner.go:130] > /usr/bin/cri-dockerd
	I1222 22:51:19.820315  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:51:19.827693  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:51:19.839649  146734 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:51:19.921176  146734 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:51:20.004043  146734 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:51:20.004160  146734 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:51:20.017007  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:51:20.028524  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:20.107815  146734 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:51:20.801234  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:51:20.813428  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:51:20.824782  146734 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:51:20.839450  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:20.850829  146734 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:51:20.931099  146734 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:51:21.012149  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.092742  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:51:21.120647  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:51:21.132196  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.256485  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:51:21.327564  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:21.340042  146734 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:51:21.340117  146734 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:51:21.343842  146734 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1222 22:51:21.343869  146734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 22:51:21.343877  146734 command_runner.go:130] > Device: 0,75	Inode: 1744        Links: 1
	I1222 22:51:21.343888  146734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1222 22:51:21.343895  146734 command_runner.go:130] > Access: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343909  146734 command_runner.go:130] > Modify: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343924  146734 command_runner.go:130] > Change: 2025-12-22 22:51:21.279839753 +0000
	I1222 22:51:21.343935  146734 command_runner.go:130] >  Birth: 2025-12-22 22:51:21.266838495 +0000
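
"Will wait 60s for socket path" is a simple poll: stat the socket until it exists or the deadline passes. A sketch of that loop (the path and timeout come from the log; the 500ms interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Succeeds once cri-dockerd has created its unix socket.
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-dockerd socket is ready")
}
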
	I1222 22:51:21.343976  146734 start.go:564] Will wait 60s for crictl version
	I1222 22:51:21.344020  146734 ssh_runner.go:195] Run: which crictl
	I1222 22:51:21.347282  146734 command_runner.go:130] > /usr/local/bin/crictl
	I1222 22:51:21.347341  146734 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:51:21.370719  146734 command_runner.go:130] > Version:  0.1.0
	I1222 22:51:21.370739  146734 command_runner.go:130] > RuntimeName:  docker
	I1222 22:51:21.370743  146734 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1222 22:51:21.370748  146734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 22:51:21.370764  146734 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:51:21.370812  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.395767  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.395836  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.418820  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.422122  146734 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:51:21.422206  146734 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:51:21.439338  146734 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:51:21.443526  146734 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 22:51:21.443628  146734 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:51:21.443753  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:21.443822  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.464281  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.464308  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.464318  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.464325  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.464332  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.464340  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.464348  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.464366  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.464395  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.464407  146734 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:51:21.464455  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.482666  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.482684  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.482690  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.482697  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.482704  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.482712  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.482729  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.482739  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.483998  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.484022  146734 cache_images.go:86] Images are preloaded, skipping loading
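(The "Images are preloaded, skipping loading" decision above is a set comparison: the list reported by `docker images --format {{.Repository}}:{{.Tag}}` is checked against the images expected for the requested Kubernetes version, and extraction of the preload tarball is skipped when nothing is missing. A sketch of that check, with the expected set hard-coded from the log output; in minikube the list comes from version constants:)

	// preloadcheck.go — sketch of the "images already preloaded" decision.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		expected := []string{ // taken verbatim from the log above
			"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
			"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1",
			"registry.k8s.io/kube-scheduler:v1.35.0-rc.1",
			"registry.k8s.io/kube-proxy:v1.35.0-rc.1",
			"registry.k8s.io/etcd:3.6.6-0",
			"registry.k8s.io/coredns/coredns:v1.13.1",
			"registry.k8s.io/pause:3.10.1",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, want := range expected {
			if !have[want] {
				fmt.Println("missing, would extract preload:", want)
				return
			}
		}
		fmt.Println("images already preloaded, skipping extraction")
	}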
	I1222 22:51:21.484036  146734 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:51:21.484172  146734 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 22:51:21.484238  146734 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:51:21.532066  146734 command_runner.go:130] > cgroupfs
	I1222 22:51:21.533783  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:21.533808  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:21.533825  146734 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:51:21.533845  146734 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:51:21.533961  146734 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 22:51:21.534020  146734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:51:21.542124  146734 command_runner.go:130] > kubeadm
	I1222 22:51:21.542141  146734 command_runner.go:130] > kubectl
	I1222 22:51:21.542144  146734 command_runner.go:130] > kubelet
	I1222 22:51:21.542165  146734 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:51:21.542214  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:51:21.549624  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:51:21.561393  146734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:51:21.572932  146734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
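(The 2223-byte kubeadm.yaml.new written above is the multi-document file printed earlier: four API objects, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, stacked with `---` separators; later in the log it is diffed against the live copy to decide whether the cluster needs reconfiguring. A small illustrative sketch that splits such a file and reports each document's kind:)

	// kubeadmdocs.go — illustrative only: enumerate the kinds in a stacked kubeadm YAML.
	package main

	import (
		"fmt"
		"strings"
	)

	func kinds(doc string) []string {
		var out []string
		for _, part := range strings.Split(doc, "\n---\n") {
			for _, line := range strings.Split(part, "\n") {
				trimmed := strings.TrimSpace(line)
				if strings.HasPrefix(trimmed, "kind:") {
					out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
					break
				}
			}
		}
		return out
	}

	func main() {
		sample := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
		fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	}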
	I1222 22:51:21.584412  146734 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:51:21.587798  146734 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 22:51:21.587903  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.667778  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:21.997732  146734 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:51:21.997755  146734 certs.go:195] generating shared ca certs ...
	I1222 22:51:21.997774  146734 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:21.997942  146734 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:51:21.998024  146734 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:51:21.998042  146734 certs.go:257] generating profile certs ...
	I1222 22:51:21.998184  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:51:21.998247  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:51:21.998298  146734 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:51:21.998317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 22:51:21.998340  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 22:51:21.998365  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 22:51:21.998382  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 22:51:21.998399  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 22:51:21.998418  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 22:51:21.998436  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 22:51:21.998454  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 22:51:21.998527  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:51:21.998578  146734 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:51:21.998635  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:51:21.998684  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:51:21.998717  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:51:21.998750  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:51:21.998813  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:21.998854  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /usr/share/ca-certificates/758032.pem
	I1222 22:51:21.998877  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:21.998896  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem -> /usr/share/ca-certificates/75803.pem
	I1222 22:51:21.999493  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:51:22.018141  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:51:22.036416  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:51:22.053080  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:51:22.069323  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:51:22.085369  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:51:22.101485  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:51:22.117634  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:51:22.133612  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:51:22.150125  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:51:22.166578  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:51:22.182911  146734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:51:22.194486  146734 ssh_runner.go:195] Run: openssl version
	I1222 22:51:22.199935  146734 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 22:51:22.200169  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.206913  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:51:22.213732  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217037  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217075  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217111  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.249675  146734 command_runner.go:130] > b5213941
	I1222 22:51:22.250033  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:51:22.257095  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.264071  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:51:22.271042  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274411  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274445  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274483  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.307772  146734 command_runner.go:130] > 51391683
	I1222 22:51:22.308113  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 22:51:22.315176  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.322196  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:51:22.329109  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332667  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332691  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332732  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.365940  146734 command_runner.go:130] > 3ec20f2e
	I1222 22:51:22.366181  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
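(Each `openssl x509 -hash -noout -in <pem>` run above prints the certificate's subject hash — b5213941, 51391683, 3ec20f2e — and the following `test -L /etc/ssl/certs/<hash>.0` confirms the symlink OpenSSL uses to look up trust anchors by hash. A sketch that derives the same link name, assuming only that the openssl binary is on PATH:)

	// certhash.go — sketch of the /etc/ssl/certs/<hash>.0 naming the log verifies.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Print the subject hash OpenSSL uses to index trust anchors.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		fmt.Printf("would link /etc/ssl/certs/%s.0 -> /usr/share/ca-certificates/minikubeCA.pem\n", hash)
	}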
	I1222 22:51:22.373802  146734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377513  146734 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377537  146734 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 22:51:22.377543  146734 command_runner.go:130] > Device: 8,1	Inode: 809094      Links: 1
	I1222 22:51:22.377550  146734 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 22:51:22.377558  146734 command_runner.go:130] > Access: 2025-12-22 22:47:15.370061162 +0000
	I1222 22:51:22.377566  146734 command_runner.go:130] > Modify: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377574  146734 command_runner.go:130] > Change: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377602  146734 command_runner.go:130] >  Birth: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377678  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:51:22.411266  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.411570  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:51:22.445025  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.445322  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:51:22.479095  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.479395  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:51:22.512263  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.512537  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:51:22.545264  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.545554  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 22:51:22.578867  146734 command_runner.go:130] > Certificate will not expire
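(`openssl x509 -checkend 86400` exits zero when the certificate is still valid 86400 seconds, i.e. 24 hours, from now; that is what produces each "Certificate will not expire" line above. An equivalent check written with Go's standard library:)

	// checkend.go — Go equivalent of `openssl x509 -checkend 86400`:
	// report whether a PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile(os.Args[1]) // path to a PEM certificate
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}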
	I1222 22:51:22.579164  146734 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:22.579364  146734 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:51:22.598061  146734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:51:22.605833  146734 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 22:51:22.605851  146734 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 22:51:22.605860  146734 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 22:51:22.605880  146734 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:51:22.605891  146734 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:51:22.605932  146734 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:51:22.613011  146734 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:51:22.613379  146734 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384766" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.613493  146734 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384766" cluster setting kubeconfig missing "functional-384766" context setting]
	I1222 22:51:22.613840  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
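(The repair above fires because the profile's cluster and context entries are missing from the shared kubeconfig; minikube then rewrites the file under a file lock, which is the "WriteFile acquiring" line. A sketch of the detection half using client-go's clientcmd loader — assumes client-go is available in the module, profile name taken from the log:)

	// kubeconfigcheck.go — sketch: does a named profile appear in a kubeconfig?
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Args[1]) // path to the kubeconfig
		if err != nil {
			panic(err)
		}
		name := "functional-384766"
		_, hasCluster := cfg.Clusters[name]
		_, hasContext := cfg.Contexts[name]
		if !hasCluster || !hasContext {
			fmt.Printf("kubeconfig needs updating: cluster=%v context=%v\n", hasCluster, hasContext)
		}
	}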
	I1222 22:51:22.614238  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.614401  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.614887  146734 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 22:51:22.614906  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 22:51:22.614915  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1222 22:51:22.614921  146734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1222 22:51:22.614926  146734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1222 22:51:22.614933  146734 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 22:51:22.614941  146734 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 22:51:22.615340  146734 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:51:22.622321  146734 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 22:51:22.622350  146734 kubeadm.go:602] duration metric: took 16.45181ms to restartPrimaryControlPlane
	I1222 22:51:22.622360  146734 kubeadm.go:403] duration metric: took 43.204719ms to StartCluster
	I1222 22:51:22.622376  146734 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.622430  146734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.622875  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.623066  146734 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 22:51:22.623138  146734 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 22:51:22.623233  146734 addons.go:70] Setting storage-provisioner=true in profile "functional-384766"
	I1222 22:51:22.623261  146734 addons.go:239] Setting addon storage-provisioner=true in "functional-384766"
	I1222 22:51:22.623284  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:22.623296  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.623288  146734 addons.go:70] Setting default-storageclass=true in profile "functional-384766"
	I1222 22:51:22.623322  146734 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384766"
	I1222 22:51:22.623660  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.623809  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.624438  146734 out.go:179] * Verifying Kubernetes components...
	I1222 22:51:22.625531  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:22.644170  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.644380  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.644456  146734 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:22.644766  146734 addons.go:239] Setting addon default-storageclass=true in "functional-384766"
	I1222 22:51:22.644810  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.645336  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.645513  146734 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.645531  146734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 22:51:22.645584  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.667387  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.668028  146734 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.668061  146734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 22:51:22.668129  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.686127  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.735817  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:22.749391  146734 node_ready.go:35] waiting up to 6m0s for node "functional-384766" to be "Ready" ...
	I1222 22:51:22.749553  146734 type.go:165] "Request Body" body=""
	I1222 22:51:22.749681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:22.749924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:22.791529  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.791702  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.858228  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.858293  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858334  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858349  146734 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.860247  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.114793  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.124266  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.170075  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.170134  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.179073  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.179145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
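(Every apply above fails the same way: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, which is not yet listening on port 8441 while the control plane restarts, so minikube retries with an increasing delay — the "will retry after 300ms" line. A generic sketch of that retry shape; the manifest path and first delay are taken from the log, the doubling backoff is illustrative:)

	// applyretry.go — sketch of retrying `kubectl apply` until the apiserver answers.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		delay := 300 * time.Millisecond // first delay matches the log
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // illustrative backoff, not minikube's exact schedule
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}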
	I1222 22:51:23.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.250418  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.250774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.384101  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.434813  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.434866  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.600155  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.651352  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.651412  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.749655  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.749735  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.901355  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.952200  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.952267  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.239666  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:24.250121  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.250189  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.250430  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:24.294448  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.294492  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.750059  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.750149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:24.750582  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
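(In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-384766 roughly twice a second and treats connection-refused as retryable, up to the 6m0s wait declared earlier. A bare-bones sketch of that loop; it skips TLS verification and sends no credentials, so once the apiserver is up it would see a 401/403 rather than the node object, but it shows the retry shape:)

	// nodeready.go — sketch of the readiness poll loop: GET the node object
	// until the apiserver answers or the deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// InsecureSkipVerify only to keep the sketch self-contained; the real
			// client uses the profile's client cert and the minikube CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-384766"
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("will retry:", err) // e.g. connect: connection refused
				time.Sleep(500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		fmt.Println("timed out waiting for node")
	}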
	I1222 22:51:24.937883  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:24.989534  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.989576  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.250004  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.250083  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.250431  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:25.372773  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:25.425171  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:25.425216  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.749629  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.749702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.750010  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.170572  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:26.222069  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.222131  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.250327  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.250414  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.250759  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.440137  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:26.491948  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.492006  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.750538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.750885  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:26.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:27.250541  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.250646  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:27.355175  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:27.403566  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:27.406149  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:27.749989  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.750066  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.750396  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.250002  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.250075  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.250397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.438810  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:28.487114  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:28.489616  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:28.750061  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.750134  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.750419  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:29.250032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.250106  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.250445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:29.250522  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
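The paired round_trippers Request/Response lines repeating every ~500ms are node_ready.go polling GET /api/v1/nodes/functional-384766 until the node reports Ready. A minimal client-go sketch of the same loop; the kubeconfig path and node name come from the log, while the 5-minute timeout is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> every 500ms, tolerating
    // connection-refused errors until the apiserver answers, then checks
    // the Ready condition -- the loop the round_trippers lines above trace.
    func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
    					return nil
    				}
    			}
    		} // dial errors (connection refused) fall through to the next poll
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready after %v", name, timeout)
    }

    func main() {
    	fmt.Println(waitNodeReady("/var/lib/minikube/kubeconfig", "functional-384766", 5*time.Minute))
    }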
	I1222 22:51:29.750041  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.750138  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.750509  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.249736  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.249807  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.250111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.636760  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:30.689934  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:30.689988  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:30.750216  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.750316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.750667  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:31.250328  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.250434  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.250799  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:31.250876  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:31.750450  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.750530  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.750869  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.250580  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.711876  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:32.750368  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.750445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.750774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.760899  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:32.763771  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.250469  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.250543  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:33.250917  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:33.406152  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:33.457687  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:33.457745  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.750192  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.750291  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.750643  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.250274  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.250352  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.250676  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.749812  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.749877  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.249850  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.516575  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:35.570400  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:35.570450  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:35.749757  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.749831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.750172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:35.750238  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:36.249789  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.249888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:36.749817  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.749889  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.750217  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.249921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.250262  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.750101  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.750202  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.750527  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:37.750609  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:38.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.250333  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.250692  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:38.358924  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:38.409955  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:38.410034  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:38.750557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.750654  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.750998  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.249528  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.249647  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.249920  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.749563  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.749697  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.750029  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:40.249635  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.249710  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.250037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:40.250107  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:40.749663  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.749734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.750058  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.249687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.750194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:42.249756  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.249861  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:42.250268  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:42.750151  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.750674  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.250325  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.250412  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.250779  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.750422  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.750837  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:44.250504  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.250574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.250924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:44.250995  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:44.670446  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:44.719419  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:44.722302  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:44.722343  146734 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
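retry.go reports growing but non-monotonic waits across this log (11.9s here, later 8.9s, 19.1s, and 41s), consistent with a jittered exponential backoff. An illustrative sketch of such a loop, not minikube's actual implementation; the base interval, doubling factor, and jitter range are assumptions:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff is an illustrative stand-in for the retry.go loop:
    // each failed attempt roughly doubles the wait and adds random jitter,
    // which is why the logged delays grow without being strictly monotonic.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	wait := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(0)
    		if half := int64(wait / 2); half > 0 {
    			jitter = time.Duration(rand.Int63n(half))
    		}
    		fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
    		time.Sleep(wait + jitter)
    		wait *= 2
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(4, 100*time.Millisecond, func() error {
    		return errors.New("connection refused")
    	})
    	fmt.Println("gave up:", err)
    }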
	I1222 22:51:44.750527  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.750632  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.750954  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.250633  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.250718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.251044  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.749665  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.749749  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.249725  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.749687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.749756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.750050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:46.750108  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:47.249647  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.249753  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.250081  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:47.750272  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.750344  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.250351  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.250445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.250816  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.750455  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.750540  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.750902  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:48.750964  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:49.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.250653  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.250985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:49.750603  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.750681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.249551  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.249641  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.249968  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.750576  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.750686  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.751008  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:50.751079  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:51.249557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.249656  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.249983  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:51.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.750094  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.572783  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:52.624461  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:52.624509  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.624539  146734 retry.go:84] will retry after 8.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.749682  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.749751  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:53.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:53.250202  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:53.749743  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.750165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.249922  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.250320  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.749879  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.750325  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:55.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.249811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.250186  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:55.250256  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:55.749720  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.750101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.249777  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.630698  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:56.682682  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:56.682728  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:56.682750  146734 retry.go:84] will retry after 19.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
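Note that the --validate=false hint in the kubectl stderr would not unblock these applies: the dial to [::1]:8441 (and to 192.168.49.2:8441 in the polling above) is refused outright, so the POST after validation would fail the same way, which is presumably why minikube retries instead. A quick probe of the endpoints confirms whether the apiserver itself is down; the URLs come from the log and /healthz is the standard apiserver health endpoint:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe dials the apiserver endpoints the log keeps failing against.
    // "connection refused" here means nothing is listening, so no kubectl
    // flag on the client side can make the apply succeed.
    func probe(url string) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// the apiserver cert is self-signed during bring-up
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	resp.Body.Close()
    	return nil
    }

    func main() {
    	for _, u := range []string{
    		"https://localhost:8441/healthz",
    		"https://192.168.49.2:8441/healthz",
    	} {
    		fmt.Println(u, "->", probe(u))
    	}
    }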
	I1222 22:51:56.749962  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.750390  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.249859  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.250169  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.750032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.750112  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.750459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:57.750526  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:58.250052  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.250129  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.250484  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:58.750074  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.750164  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.750559  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.250376  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.250455  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.750547  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.750668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.751053  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:59.751124  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:00.249679  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.249756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.250124  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:00.749766  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.749870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.750200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.250214  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.555677  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:01.608817  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:01.608873  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.608898  146734 retry.go:84] will retry after 11.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.750139  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.750232  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.750541  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:02.250350  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.250446  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.250884  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:02.250959  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:02.749991  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.750087  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.750489  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.249785  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.250222  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.749863  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.749953  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.750330  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.249910  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.249991  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.749787  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.749878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.750255  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:04.750328  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:05.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.249881  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.250215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:05.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.749905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.750236  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.250166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.749813  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.750292  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:06.750353  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:07.249913  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.249997  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.250350  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:07.750157  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.750249  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.750625  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.250269  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.250349  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.250699  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.750338  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.750417  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.750817  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:08.750880  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:09.250447  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.250886  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:09.750542  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.750651  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.751017  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.249667  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.250007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.749614  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.749698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.749986  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:11.249644  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.249721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.250050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:11.250115  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:11.749702  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.749781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.250570  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.250676  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.749204  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:12.749953  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.750037  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.750364  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.803295  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:12.803361  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:12.803388  146734 retry.go:84] will retry after 41s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
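[editor's note] The failed apply above is handed to a generic retry wrapper (retry.go), which reruns the same ssh command after a delay ("will retry after 41s"). A minimal sketch of that pattern under assumed backoff values; the real schedule is computed inside minikube, and applyWithRetry is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry reruns `kubectl apply` until it succeeds or maxWait passes.
// The starting backoff and its growth rate here are illustrative only.
func applyWithRetry(kubectl, manifest string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 20 * time.Second
	for {
		// sudo accepts VAR=value arguments, matching the logged command form.
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %v\noutput:\n%s", err, out)
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff += backoff / 4 // grow 25% per attempt (illustrative)
	}
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		3*time.Minute,
	)
	if err != nil {
		fmt.Println(err)
	}
}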
	I1222 22:52:13.249864  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.249961  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.250341  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:13.250413  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:13.749947  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.750050  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.750385  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.249969  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.250047  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.250429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.250182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.749736  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.750176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:15.750272  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:15.781356  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:15.834579  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:15.834644  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:15.834678  146734 retry.go:84] will retry after 22s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
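[editor's note] This failure is client-side: kubectl downloads the OpenAPI schema from localhost:8441 to validate the manifest before sending it, so a refused connection fails the apply even when the YAML itself is fine. Passing --validate=false, as the error text suggests, would skip the schema download, but the apply would still need a reachable apiserver, which is why minikube retries instead of disabling validation. A hypothetical variant of the same command via os/exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as in the log, with client-side validation disabled.
	// This only skips the OpenAPI download; the apply itself still fails
	// while the apiserver is down, so retrying is the real fix.
	out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
	}
}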
	I1222 22:52:16.250185  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.250641  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:16.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.750391  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.750749  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.250375  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.250470  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.250796  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.750663  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.750770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.751155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:17.751219  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:18.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.249774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:18.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.750127  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.249772  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.249846  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.749726  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.750128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:20.249691  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.249767  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.250123  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:20.250193  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:20.749739  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.749820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.750153  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.249730  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.749804  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.750232  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:22.249806  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.250237  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:22.250298  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:22.750142  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.750516  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.250222  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.250332  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.250708  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.750972  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.249568  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.249660  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.249969  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.749566  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.749670  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.750007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:24.750078  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:25.249560  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.249668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.250009  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:25.749615  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.749711  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.249804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.250176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.749791  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.749896  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:26.750329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:27.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.249822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:27.749959  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.750049  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.750408  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.249981  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.250077  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.250414  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.749755  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.750148  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:29.249816  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.249914  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.250248  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:29.250329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:29.749820  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.750206  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.249734  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.250163  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.249714  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.249787  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.250100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.749725  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.750152  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:31.750223  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:32.249759  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.249872  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.250213  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:32.750284  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.750382  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.750808  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.250484  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.250553  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.750582  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.750682  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.751084  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:33.751151  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:34.249678  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:34.749750  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.750178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.249866  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.250195  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.749858  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.749938  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:36.249945  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.250030  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.250372  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:36.250452  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:36.750049  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.750122  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.750536  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.250238  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.250338  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.250710  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.750607  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.750693  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.751037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.791261  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:37.841791  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:37.841848  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:37.841882  146734 retry.go:84] will retry after 24.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
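[editor's note] The delays chosen across attempts (41s, then 22s, now 24.9s) are not monotone, which suggests the retry schedule is randomized rather than a fixed exponential ladder. A jittered-backoff sketch under that assumption; the actual schedule lives in minikube's retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// jittered scales a base delay by a random factor in [1, 1+spread) so that
// repeated attempts from concurrent workers do not synchronize.
func jittered(base time.Duration, spread float64) time.Duration {
	return time.Duration(float64(base) * (1 + rand.Float64()*spread))
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Printf("attempt %d: sleeping %s\n", attempt, jittered(20*time.Second, 1.0))
	}
}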
	I1222 22:52:38.250412  146734 type.go:165] "Request Body" body=""
	I1222 22:52:38.250501  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:38.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:38.250927  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:38.750539  146734 type.go:165] "Request Body" body=""
	I1222 22:52:38.750640  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:38.750989  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:39.250534  146734 type.go:165] "Request Body" body=""
	I1222 22:52:39.250619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:39.250903  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:39.750622  146734 type.go:165] "Request Body" body=""
	I1222 22:52:39.750769  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:39.751121  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:40.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:40.249817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:40.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:40.749725  146734 type.go:165] "Request Body" body=""
	I1222 22:52:40.749800  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:40.750112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:40.750183  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:41.249733  146734 type.go:165] "Request Body" body=""
	I1222 22:52:41.249816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:41.250159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:41.749758  146734 type.go:165] "Request Body" body=""
	I1222 22:52:41.749830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:41.750172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:42.249740  146734 type.go:165] "Request Body" body=""
	I1222 22:52:42.249820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:42.250217  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:42.750262  146734 type.go:165] "Request Body" body=""
	I1222 22:52:42.750373  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:42.750720  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:42.750785  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:43.250407  146734 type.go:165] "Request Body" body=""
	I1222 22:52:43.250502  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:43.250877  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:43.750507  146734 type.go:165] "Request Body" body=""
	I1222 22:52:43.750607  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:43.750955  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:44.250608  146734 type.go:165] "Request Body" body=""
	I1222 22:52:44.250729  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:44.251071  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:44.749661  146734 type.go:165] "Request Body" body=""
	I1222 22:52:44.749764  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:44.750070  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:45.249668  146734 type.go:165] "Request Body" body=""
	I1222 22:52:45.249750  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:45.250041  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:45.250109  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:45.749753  146734 type.go:165] "Request Body" body=""
	I1222 22:52:45.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:45.750181  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:46.249757  146734 type.go:165] "Request Body" body=""
	I1222 22:52:46.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:46.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:46.749749  146734 type.go:165] "Request Body" body=""
	I1222 22:52:46.749824  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:46.750208  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:47.249770  146734 type.go:165] "Request Body" body=""
	I1222 22:52:47.249839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:47.250180  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:47.250253  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:47.750010  146734 type.go:165] "Request Body" body=""
	I1222 22:52:47.750110  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:47.750420  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:48.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:52:48.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:48.250079  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:48.750048  146734 type.go:165] "Request Body" body=""
	I1222 22:52:48.750143  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:48.750506  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:49.250090  146734 type.go:165] "Request Body" body=""
	I1222 22:52:49.250180  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:49.250514  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:49.250621  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:49.750101  146734 type.go:165] "Request Body" body=""
	I1222 22:52:49.750203  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:49.750541  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:50.250324  146734 type.go:165] "Request Body" body=""
	I1222 22:52:50.250405  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:50.250760  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:50.750452  146734 type.go:165] "Request Body" body=""
	I1222 22:52:50.750541  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:50.750912  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:51.249606  146734 type.go:165] "Request Body" body=""
	I1222 22:52:51.249697  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:51.250034  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:51.749707  146734 type.go:165] "Request Body" body=""
	I1222 22:52:51.749808  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:51.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:51.750217  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:52.249714  146734 type.go:165] "Request Body" body=""
	I1222 22:52:52.249790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:52.250101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:52.750121  146734 type.go:165] "Request Body" body=""
	I1222 22:52:52.750209  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:52.750561  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:53.250207  146734 type.go:165] "Request Body" body=""
	I1222 22:52:53.250279  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:53.250649  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:53.750285  146734 type.go:165] "Request Body" body=""
	I1222 22:52:53.750382  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:53.750757  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:53.750818  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:53.825965  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:53.875648  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878317  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878441  146734 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
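[editor's note] Once the retry budget for an addon is exhausted, minikube surfaces the failure as a user-facing warning ("Enabling 'storage-provisioner' returned an error: running callbacks: [...]") and continues startup rather than aborting. A sketch of that collect-and-join callback pattern; runCallbacks is a hypothetical name, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
)

// runCallbacks runs every addon callback, collects the failures, and
// returns them joined so the caller can print one aggregate warning.
func runCallbacks(cbs []func() error) error {
	var errs []error
	for _, cb := range cbs {
		if err := cb(); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...) // Go 1.20+
}

func main() {
	err := runCallbacks([]func() error{
		func() error { return errors.New("apply storage-provisioner: connection refused") },
	})
	if err != nil {
		fmt.Printf("! Enabling 'storage-provisioner' returned an error: running callbacks: [%v]\n", err)
	}
}

Since the cluster keeps starting, the addon can presumably be re-enabled once the apiserver is reachable, e.g. with `minikube addons enable storage-provisioner -p functional-384766`.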
	I1222 22:52:54.249881  146734 type.go:165] "Request Body" body=""
	I1222 22:52:54.249965  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:54.250291  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:54.749880  146734 type.go:165] "Request Body" body=""
	I1222 22:52:54.749992  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:54.750339  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:55.249961  146734 type.go:165] "Request Body" body=""
	I1222 22:52:55.250051  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:55.250408  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:55.750006  146734 type.go:165] "Request Body" body=""
	I1222 22:52:55.750102  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:55.750525  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:56.250142  146734 type.go:165] "Request Body" body=""
	I1222 22:52:56.250214  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:56.250523  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:56.250588  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:56.750239  146734 type.go:165] "Request Body" body=""
	I1222 22:52:56.750323  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:56.750714  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:57.250354  146734 type.go:165] "Request Body" body=""
	I1222 22:52:57.250424  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:57.250804  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:57.750628  146734 type.go:165] "Request Body" body=""
	I1222 22:52:57.750717  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:57.751065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:58.249685  146734 type.go:165] "Request Body" body=""
	I1222 22:52:58.249757  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:58.250088  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:58.749668  146734 type.go:165] "Request Body" body=""
	I1222 22:52:58.749748  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:58.750061  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:58.750137  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:59.249726  146734 type.go:165] "Request Body" body=""
	I1222 22:52:59.249899  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:59.250271  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:59.749844  146734 type.go:165] "Request Body" body=""
	I1222 22:52:59.749922  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:59.750248  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:00.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:53:00.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:00.250192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:00.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:53:00.749826  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:00.750162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:00.750221  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:01.249640  146734 type.go:165] "Request Body" body=""
	I1222 22:53:01.249734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:01.250068  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:01.749640  146734 type.go:165] "Request Body" body=""
	I1222 22:53:01.749713  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:01.749993  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:02.249646  146734 type.go:165] "Request Body" body=""
	I1222 22:53:02.249726  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:02.250075  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:02.750082  146734 type.go:165] "Request Body" body=""
	I1222 22:53:02.750162  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:02.750495  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:02.750554  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:02.761644  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:53:02.811523  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814242  146734 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 22:53:02.815929  146734 out.go:179] * Enabled addons: 
	I1222 22:53:02.817068  146734 addons.go:530] duration metric: took 1m40.193946362s for enable addons: enabled=[]
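
[editor's note] The block above is why addon enablement ends with enabled=[]: kubectl apply first validates the manifest against the apiserver's OpenAPI schema, and with nothing listening on port 8441 even that pre-flight fetch gets connection refused, so the apply exits with status 1 and minikube surfaces the captured stdout and stderr. Below is a minimal Go sketch of that run-and-surface step, assuming the command line copied from the log; the helper name is hypothetical.

// applyAddon: a minimal sketch of the "run kubectl apply, surface
// stdout/stderr on failure" step logged above (ssh_runner.go / addons.go).
// The command and flags are copied from the log; the helper is hypothetical.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	// sudo accepts VAR=value arguments before the command it runs.
	cmd := exec.Command(
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"apply", "--force", "-f", manifest,
	)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// Same shape as the log's error block: exit status, then the
		// captured stdout and stderr for the operator to read.
		return fmt.Errorf("apply failed, will retry: %v\nstdout:\n%s\nstderr:\n%s",
			err, stdout.String(), stderr.String())
	}
	return nil
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println(err)
	}
}

Note that the stderr hint about --validate=false would only skip the OpenAPI schema fetch; the apply itself still needs a reachable apiserver, so waiting for the apiserver to come back (as the surrounding poll loop attempts) is the actual remedy.
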
	I1222 22:53:03.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:53:03.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:03.250104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:03.749742  146734 type.go:165] "Request Body" body=""
	I1222 22:53:03.749825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:03.750182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:04.249738  146734 type.go:165] "Request Body" body=""
	I1222 22:53:04.249809  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:04.250142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:04.749826  146734 type.go:165] "Request Body" body=""
	I1222 22:53:04.749903  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:04.750163  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:05.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:53:05.249795  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:05.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:05.250198  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:05.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:05.749806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:05.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:06.249727  146734 type.go:165] "Request Body" body=""
	I1222 22:53:06.249793  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:06.250084  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:06.749693  146734 type.go:165] "Request Body" body=""
	I1222 22:53:06.749808  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:06.750149  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:07.249718  146734 type.go:165] "Request Body" body=""
	I1222 22:53:07.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:07.250157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:07.250228  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:07.749972  146734 type.go:165] "Request Body" body=""
	I1222 22:53:07.750046  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:07.750368  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:08.249951  146734 type.go:165] "Request Body" body=""
	I1222 22:53:08.250025  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:08.250370  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:08.750009  146734 type.go:165] "Request Body" body=""
	I1222 22:53:08.750097  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:08.750447  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:09.250029  146734 type.go:165] "Request Body" body=""
	I1222 22:53:09.250111  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:09.250445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:09.250517  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:09.750056  146734 type.go:165] "Request Body" body=""
	I1222 22:53:09.750146  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:09.750490  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:10.250067  146734 type.go:165] "Request Body" body=""
	I1222 22:53:10.250142  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:10.250503  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:10.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:10.749826  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:10.750162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:11.249711  146734 type.go:165] "Request Body" body=""
	I1222 22:53:11.249778  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:11.250102  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:11.749714  146734 type.go:165] "Request Body" body=""
	I1222 22:53:11.749820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:11.750162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:11.750229  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:12.249740  146734 type.go:165] "Request Body" body=""
	I1222 22:53:12.249829  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:12.250202  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:12.750297  146734 type.go:165] "Request Body" body=""
	I1222 22:53:12.750400  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:12.750823  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:13.250490  146734 type.go:165] "Request Body" body=""
	I1222 22:53:13.250610  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:13.250943  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:13.750581  146734 type.go:165] "Request Body" body=""
	I1222 22:53:13.750675  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:13.751030  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:13.751101  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:14.249749  146734 type.go:165] "Request Body" body=""
	I1222 22:53:14.249848  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:14.250187  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:14.749731  146734 type.go:165] "Request Body" body=""
	I1222 22:53:14.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:14.750129  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:15.249670  146734 type.go:165] "Request Body" body=""
	I1222 22:53:15.249753  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:15.250047  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:15.749620  146734 type.go:165] "Request Body" body=""
	I1222 22:53:15.749692  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:15.750012  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:16.249630  146734 type.go:165] "Request Body" body=""
	I1222 22:53:16.249712  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:16.250033  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:16.250099  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:16.749569  146734 type.go:165] "Request Body" body=""
	I1222 22:53:16.749652  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:16.750014  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:17.249620  146734 type.go:165] "Request Body" body=""
	I1222 22:53:17.249709  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:17.250072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:17.749929  146734 type.go:165] "Request Body" body=""
	I1222 22:53:17.750004  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:17.750365  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:18.249926  146734 type.go:165] "Request Body" body=""
	I1222 22:53:18.250000  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:18.250371  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:18.250470  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:18.749909  146734 type.go:165] "Request Body" body=""
	I1222 22:53:18.749982  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:18.750366  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:19.249922  146734 type.go:165] "Request Body" body=""
	I1222 22:53:19.250017  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:19.250357  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:19.750009  146734 type.go:165] "Request Body" body=""
	I1222 22:53:19.750311  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:19.750720  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:20.249710  146734 type.go:165] "Request Body" body=""
	I1222 22:53:20.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:20.250138  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:20.749729  146734 type.go:165] "Request Body" body=""
	I1222 22:53:20.749802  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:20.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:20.750181  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:21.249727  146734 type.go:165] "Request Body" body=""
	I1222 22:53:21.249810  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:21.250104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:21.749698  146734 type.go:165] "Request Body" body=""
	I1222 22:53:21.749771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:21.750100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:22.249641  146734 type.go:165] "Request Body" body=""
	I1222 22:53:22.249720  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:22.250039  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:22.750075  146734 type.go:165] "Request Body" body=""
	I1222 22:53:22.750156  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:22.750471  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:22.750535  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:23.250068  146734 type.go:165] "Request Body" body=""
	I1222 22:53:23.250145  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:23.250478  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:23.750036  146734 type.go:165] "Request Body" body=""
	I1222 22:53:23.750110  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:23.750437  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:24.250017  146734 type.go:165] "Request Body" body=""
	I1222 22:53:24.250093  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:24.250476  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:24.750180  146734 type.go:165] "Request Body" body=""
	I1222 22:53:24.750257  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:24.750588  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:24.750677  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:25.250225  146734 type.go:165] "Request Body" body=""
	I1222 22:53:25.250324  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:25.250676  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:25.749787  146734 type.go:165] "Request Body" body=""
	I1222 22:53:25.749860  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:25.750164  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:26.249730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:26.249811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:26.250140  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:26.749761  146734 type.go:165] "Request Body" body=""
	I1222 22:53:26.749838  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:26.750123  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:27.249731  146734 type.go:165] "Request Body" body=""
	I1222 22:53:27.249803  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:27.250133  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:27.250212  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:27.749969  146734 type.go:165] "Request Body" body=""
	I1222 22:53:27.750075  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:27.750451  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:28.250072  146734 type.go:165] "Request Body" body=""
	I1222 22:53:28.250148  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:28.250489  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:28.749678  146734 type.go:165] "Request Body" body=""
	I1222 22:53:28.749774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:28.750036  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:29.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:53:29.249804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:29.250123  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:29.749729  146734 type.go:165] "Request Body" body=""
	I1222 22:53:29.749802  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:29.750160  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:29.750223  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:30.249702  146734 type.go:165] "Request Body" body=""
	I1222 22:53:30.249782  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:30.250134  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:30.749711  146734 type.go:165] "Request Body" body=""
	I1222 22:53:30.749794  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:30.750154  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:31.249754  146734 type.go:165] "Request Body" body=""
	I1222 22:53:31.249836  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:31.250160  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:31.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:53:31.749788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:31.750082  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:32.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:53:32.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:32.250166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:32.250234  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:32.750188  146734 type.go:165] "Request Body" body=""
	I1222 22:53:32.750294  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:32.750695  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:33.250318  146734 type.go:165] "Request Body" body=""
	I1222 22:53:33.250417  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:33.250846  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:33.750507  146734 type.go:165] "Request Body" body=""
	I1222 22:53:33.750613  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:33.750990  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:34.249623  146734 type.go:165] "Request Body" body=""
	I1222 22:53:34.249700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:34.250041  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:34.749622  146734 type.go:165] "Request Body" body=""
	I1222 22:53:34.749696  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:34.749991  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:34.750050  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:35.249689  146734 type.go:165] "Request Body" body=""
	I1222 22:53:35.249763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:35.250117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:35.749695  146734 type.go:165] "Request Body" body=""
	I1222 22:53:35.749776  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:35.750097  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:36.249798  146734 type.go:165] "Request Body" body=""
	I1222 22:53:36.249903  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:36.250277  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:36.749846  146734 type.go:165] "Request Body" body=""
	I1222 22:53:36.749925  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:36.750288  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:36.750366  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:37.249900  146734 type.go:165] "Request Body" body=""
	I1222 22:53:37.249980  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:37.250334  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:37.750193  146734 type.go:165] "Request Body" body=""
	I1222 22:53:37.750266  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:37.750582  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:38.250318  146734 type.go:165] "Request Body" body=""
	I1222 22:53:38.250402  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:38.250780  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:38.750488  146734 type.go:165] "Request Body" body=""
	I1222 22:53:38.750565  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:38.750945  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:38.751009  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:39.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:53:39.250638  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:39.250958  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:39.749568  146734 type.go:165] "Request Body" body=""
	I1222 22:53:39.749673  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:39.750024  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:40.249671  146734 type.go:165] "Request Body" body=""
	I1222 22:53:40.249757  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:40.250102  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:40.749680  146734 type.go:165] "Request Body" body=""
	I1222 22:53:40.749755  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:40.750071  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:41.249740  146734 type.go:165] "Request Body" body=""
	I1222 22:53:41.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:41.250168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:41.250233  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:41.749722  146734 type.go:165] "Request Body" body=""
	I1222 22:53:41.749804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:41.750139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:42.249731  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.249824  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.250150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:42.750207  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.750296  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.750623  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:43.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.250405  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:43.250898  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:43.750513  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.750619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.750993  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.249564  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.249700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.250049  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.749717  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.749793  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.750110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.249673  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.249765  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.749727  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.750168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:45.750236  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:46.249739  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.249823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.250174  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:46.749761  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.749836  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.750164  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.749948  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.750397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:47.750471  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:48.249836  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.249915  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.250168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:48.749809  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.750240  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.249899  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.250266  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:50.249693  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.250117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:50.250187  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:50.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the request/response cycle above repeats every ~500ms: each GET to https://192.168.49.2:8441/api/v1/nodes/functional-384766 returns status="" headers="" milliseconds=0 (the TCP connection is refused before any HTTP response), and node_ready.go:55 logs the same "will retry" connection-refused warning roughly every 2s from 22:53:52 through 22:54:52 ...]
	W1222 22:54:52.750725  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:53.250283  146734 type.go:165] "Request Body" body=""
	I1222 22:54:53.250369  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:53.250696  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:53.750388  146734 type.go:165] "Request Body" body=""
	I1222 22:54:53.750469  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:53.750824  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:54.250460  146734 type.go:165] "Request Body" body=""
	I1222 22:54:54.250538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:54.250871  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:54.750538  146734 type.go:165] "Request Body" body=""
	I1222 22:54:54.750645  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:54.750985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:54.751057  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:55.249649  146734 type.go:165] "Request Body" body=""
	I1222 22:54:55.249732  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:55.250057  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:55.749707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:55.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:55.750108  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:56.249688  146734 type.go:165] "Request Body" body=""
	I1222 22:54:56.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:56.250095  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:56.749726  146734 type.go:165] "Request Body" body=""
	I1222 22:54:56.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:56.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:57.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:54:57.249701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:57.250071  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:57.250148  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:57.750019  146734 type.go:165] "Request Body" body=""
	I1222 22:54:57.750094  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:57.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:58.249983  146734 type.go:165] "Request Body" body=""
	I1222 22:54:58.250056  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:58.250388  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:58.749735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:58.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:58.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:59.249693  146734 type.go:165] "Request Body" body=""
	I1222 22:54:59.249770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:59.250112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:59.250182  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:59.749735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:59.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:59.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:00.249747  146734 type.go:165] "Request Body" body=""
	I1222 22:55:00.249832  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:00.250162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:00.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:55:00.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:00.750139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:01.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:55:01.249796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:01.250170  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:01.250234  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:01.749747  146734 type.go:165] "Request Body" body=""
	I1222 22:55:01.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:01.750177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:02.249773  146734 type.go:165] "Request Body" body=""
	I1222 22:55:02.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:02.250190  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:02.750185  146734 type.go:165] "Request Body" body=""
	I1222 22:55:02.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:02.750679  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:03.250410  146734 type.go:165] "Request Body" body=""
	I1222 22:55:03.250484  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:03.250800  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:03.250864  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:03.750490  146734 type.go:165] "Request Body" body=""
	I1222 22:55:03.750574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:03.750953  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:04.250588  146734 type.go:165] "Request Body" body=""
	I1222 22:55:04.250700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:04.251072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:04.749564  146734 type.go:165] "Request Body" body=""
	I1222 22:55:04.749679  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:04.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:05.249662  146734 type.go:165] "Request Body" body=""
	I1222 22:55:05.249748  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:05.250095  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:05.749749  146734 type.go:165] "Request Body" body=""
	I1222 22:55:05.749828  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:05.750162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:05.750227  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:06.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:06.249815  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:06.250126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:06.749723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:06.749809  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:06.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:07.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:55:07.249819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:07.250155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:07.749808  146734 type.go:165] "Request Body" body=""
	I1222 22:55:07.749885  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:07.750206  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:07.750279  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:08.249799  146734 type.go:165] "Request Body" body=""
	I1222 22:55:08.249870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:08.250200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:08.749784  146734 type.go:165] "Request Body" body=""
	I1222 22:55:08.749858  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:08.750157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:09.249830  146734 type.go:165] "Request Body" body=""
	I1222 22:55:09.249933  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:09.250259  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:09.749895  146734 type.go:165] "Request Body" body=""
	I1222 22:55:09.749973  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:09.750307  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:09.750375  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:10.249899  146734 type.go:165] "Request Body" body=""
	I1222 22:55:10.249974  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:10.250316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:10.749711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:10.749786  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:10.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:11.249748  146734 type.go:165] "Request Body" body=""
	I1222 22:55:11.249819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:11.250148  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:11.749856  146734 type.go:165] "Request Body" body=""
	I1222 22:55:11.749994  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:11.750335  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:12.249957  146734 type.go:165] "Request Body" body=""
	I1222 22:55:12.250025  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:12.250328  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:12.250386  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:12.750340  146734 type.go:165] "Request Body" body=""
	I1222 22:55:12.750433  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:12.750899  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:13.250509  146734 type.go:165] "Request Body" body=""
	I1222 22:55:13.250620  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:13.250953  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:13.750574  146734 type.go:165] "Request Body" body=""
	I1222 22:55:13.750664  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:13.750913  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:14.249616  146734 type.go:165] "Request Body" body=""
	I1222 22:55:14.249702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:14.250052  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:14.749677  146734 type.go:165] "Request Body" body=""
	I1222 22:55:14.749762  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:14.750119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:14.750178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:15.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:55:15.249782  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:15.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:15.749689  146734 type.go:165] "Request Body" body=""
	I1222 22:55:15.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:15.750110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:16.249711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:16.249796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:16.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:16.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:55:16.749779  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:16.750098  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:17.249718  146734 type.go:165] "Request Body" body=""
	I1222 22:55:17.249799  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:17.250129  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:17.250204  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:17.749909  146734 type.go:165] "Request Body" body=""
	I1222 22:55:17.750010  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:17.750386  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:18.250005  146734 type.go:165] "Request Body" body=""
	I1222 22:55:18.250097  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:18.250519  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:18.750237  146734 type.go:165] "Request Body" body=""
	I1222 22:55:18.750318  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:18.750671  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:19.250360  146734 type.go:165] "Request Body" body=""
	I1222 22:55:19.250435  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:19.250782  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:19.250850  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:19.750393  146734 type.go:165] "Request Body" body=""
	I1222 22:55:19.750473  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:19.750812  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:20.250244  146734 type.go:165] "Request Body" body=""
	I1222 22:55:20.250340  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:20.250766  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:20.749617  146734 type.go:165] "Request Body" body=""
	I1222 22:55:20.749731  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:20.750232  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:21.249738  146734 type.go:165] "Request Body" body=""
	I1222 22:55:21.249818  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:21.250145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:21.749880  146734 type.go:165] "Request Body" body=""
	I1222 22:55:21.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:21.750345  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:21.750422  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:22.249651  146734 type.go:165] "Request Body" body=""
	I1222 22:55:22.249745  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:22.250098  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:22.750064  146734 type.go:165] "Request Body" body=""
	I1222 22:55:22.750133  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:22.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:23.250053  146734 type.go:165] "Request Body" body=""
	I1222 22:55:23.250126  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:23.250447  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:23.750003  146734 type.go:165] "Request Body" body=""
	I1222 22:55:23.750079  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:23.750487  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:23.750580  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:24.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.250165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:24.749758  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.749828  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.750192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.249696  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.249769  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.250072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.749697  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:26.249886  146734 type.go:165] "Request Body" body=""
	I1222 22:55:26.249958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:26.250275  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:26.250336  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:26.750067  146734 type.go:165] "Request Body" body=""
	I1222 22:55:26.750154  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:26.750517  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:27.250329  146734 type.go:165] "Request Body" body=""
	I1222 22:55:27.250410  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:27.250697  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:27.750580  146734 type.go:165] "Request Body" body=""
	I1222 22:55:27.750669  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:27.751022  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:28.249726  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.249808  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:28.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:55:28.749811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:28.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:28.750237  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:29.249909  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.249982  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.250305  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:29.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:55:29.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:29.750450  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.250215  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.250646  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:30.750488  146734 type.go:165] "Request Body" body=""
	I1222 22:55:30.750567  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:30.750921  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:30.750983  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:31.249701  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.249780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:31.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:55:31.749792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:31.750145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.249871  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.249942  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.250315  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:32.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:55:32.750384  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:32.750771  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:33.250608  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.250702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.251142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:33.251214  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:33.749937  146734 type.go:165] "Request Body" body=""
	I1222 22:55:33.750014  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:33.750358  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.250212  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.250294  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.250661  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:34.750449  146734 type.go:165] "Request Body" body=""
	I1222 22:55:34.750521  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:34.750895  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.249645  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.249716  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:35.749839  146734 type.go:165] "Request Body" body=""
	I1222 22:55:35.749916  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:35.750258  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:35.750321  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:36.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.249815  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.250128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:36.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:55:36.749780  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:36.750111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.249872  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.249947  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.250270  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:37.750107  146734 type.go:165] "Request Body" body=""
	I1222 22:55:37.750195  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:37.750537  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:37.750607  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:38.250386  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.250473  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.250844  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:38.749618  146734 type.go:165] "Request Body" body=""
	I1222 22:55:38.749699  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:38.750037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.249712  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.249799  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.250114  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:39.749869  146734 type.go:165] "Request Body" body=""
	I1222 22:55:39.749958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:39.750319  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:40.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.249789  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.250125  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:40.250192  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:40.749723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:40.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:40.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.249867  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.249964  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.250300  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:41.750030  146734 type.go:165] "Request Body" body=""
	I1222 22:55:41.750104  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:41.750449  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:42.250237  146734 type.go:165] "Request Body" body=""
	I1222 22:55:42.250316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:42.250702  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:42.250790  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 near-identical retry cycles elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-384766 repeated every ~500ms from 22:55:42 through 22:56:43, each with an empty request body and empty response (status="" milliseconds=0), and node_ready.go:55 logging the same "connection refused" (will retry) warning roughly every 2.5s ...]
	I1222 22:56:43.749662  146734 type.go:165] "Request Body" body=""
	I1222 22:56:43.749749  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:43.750068  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:44.249733  146734 type.go:165] "Request Body" body=""
	I1222 22:56:44.249816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:44.250156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:44.749892  146734 type.go:165] "Request Body" body=""
	I1222 22:56:44.749976  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:44.750375  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:45.249835  146734 type.go:165] "Request Body" body=""
	I1222 22:56:45.249925  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:45.250244  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:45.750013  146734 type.go:165] "Request Body" body=""
	I1222 22:56:45.750091  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:45.750446  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:45.750514  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:46.250285  146734 type.go:165] "Request Body" body=""
	I1222 22:56:46.250360  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:46.250755  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:46.749559  146734 type.go:165] "Request Body" body=""
	I1222 22:56:46.749670  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:46.750010  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:47.249802  146734 type.go:165] "Request Body" body=""
	I1222 22:56:47.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:47.250237  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:47.749902  146734 type.go:165] "Request Body" body=""
	I1222 22:56:47.750019  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:47.750394  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:48.250358  146734 type.go:165] "Request Body" body=""
	I1222 22:56:48.250462  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:48.250882  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:48.250970  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:48.749733  146734 type.go:165] "Request Body" body=""
	I1222 22:56:48.749822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:48.750138  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:49.249857  146734 type.go:165] "Request Body" body=""
	I1222 22:56:49.249934  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:49.250272  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:49.750058  146734 type.go:165] "Request Body" body=""
	I1222 22:56:49.750161  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:49.750546  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:50.250534  146734 type.go:165] "Request Body" body=""
	I1222 22:56:50.250637  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:50.250970  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:50.251043  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:50.749768  146734 type.go:165] "Request Body" body=""
	I1222 22:56:50.749857  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:50.750193  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:51.250007  146734 type.go:165] "Request Body" body=""
	I1222 22:56:51.250097  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:51.250480  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:51.750285  146734 type.go:165] "Request Body" body=""
	I1222 22:56:51.750367  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:51.750694  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:52.250507  146734 type.go:165] "Request Body" body=""
	I1222 22:56:52.250580  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:52.250924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:52.749846  146734 type.go:165] "Request Body" body=""
	I1222 22:56:52.749917  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:52.750231  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:52.750289  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:53.250037  146734 type.go:165] "Request Body" body=""
	I1222 22:56:53.250116  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:53.250450  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:53.750301  146734 type.go:165] "Request Body" body=""
	I1222 22:56:53.750379  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:53.750711  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:54.250570  146734 type.go:165] "Request Body" body=""
	I1222 22:56:54.250683  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:54.251037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:54.749778  146734 type.go:165] "Request Body" body=""
	I1222 22:56:54.749870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:54.750212  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:55.249925  146734 type.go:165] "Request Body" body=""
	I1222 22:56:55.250005  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:55.250325  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:55.250383  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:55.750056  146734 type.go:165] "Request Body" body=""
	I1222 22:56:55.750129  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:55.750431  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:56.250241  146734 type.go:165] "Request Body" body=""
	I1222 22:56:56.250315  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:56.250691  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:56.750506  146734 type.go:165] "Request Body" body=""
	I1222 22:56:56.750582  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:56.750942  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:57.249670  146734 type.go:165] "Request Body" body=""
	I1222 22:56:57.249740  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:57.250040  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:57.749841  146734 type.go:165] "Request Body" body=""
	I1222 22:56:57.749915  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:57.750251  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:57.750326  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:58.249993  146734 type.go:165] "Request Body" body=""
	I1222 22:56:58.250069  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:58.250401  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:58.749827  146734 type.go:165] "Request Body" body=""
	I1222 22:56:58.749905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:58.750192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:59.249907  146734 type.go:165] "Request Body" body=""
	I1222 22:56:59.249979  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:59.250306  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:59.750056  146734 type.go:165] "Request Body" body=""
	I1222 22:56:59.750136  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:59.750491  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:59.750565  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:00.250323  146734 type.go:165] "Request Body" body=""
	I1222 22:57:00.250408  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:00.250755  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:00.749631  146734 type.go:165] "Request Body" body=""
	I1222 22:57:00.749707  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:00.750021  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:01.249677  146734 type.go:165] "Request Body" body=""
	I1222 22:57:01.249762  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:01.250096  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:01.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:57:01.749919  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:01.750234  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:02.249941  146734 type.go:165] "Request Body" body=""
	I1222 22:57:02.250014  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:02.250348  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:02.250421  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:02.750212  146734 type.go:165] "Request Body" body=""
	I1222 22:57:02.750300  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:02.750654  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:03.250450  146734 type.go:165] "Request Body" body=""
	I1222 22:57:03.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:03.250851  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:03.749576  146734 type.go:165] "Request Body" body=""
	I1222 22:57:03.749665  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:03.749988  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:04.249767  146734 type.go:165] "Request Body" body=""
	I1222 22:57:04.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:04.250173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:04.750033  146734 type.go:165] "Request Body" body=""
	I1222 22:57:04.750117  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:04.750502  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:04.750569  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:05.250312  146734 type.go:165] "Request Body" body=""
	I1222 22:57:05.250398  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:05.250762  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:05.750575  146734 type.go:165] "Request Body" body=""
	I1222 22:57:05.750666  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:05.751012  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:06.249706  146734 type.go:165] "Request Body" body=""
	I1222 22:57:06.249781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:06.250093  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:06.749823  146734 type.go:165] "Request Body" body=""
	I1222 22:57:06.749898  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:06.750282  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:07.250051  146734 type.go:165] "Request Body" body=""
	I1222 22:57:07.250124  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:07.250473  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:07.250533  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:07.750214  146734 type.go:165] "Request Body" body=""
	I1222 22:57:07.750298  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:07.750580  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:08.250395  146734 type.go:165] "Request Body" body=""
	I1222 22:57:08.250486  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:08.250871  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:08.749688  146734 type.go:165] "Request Body" body=""
	I1222 22:57:08.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:08.750089  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:09.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:57:09.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:09.250145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:09.749900  146734 type.go:165] "Request Body" body=""
	I1222 22:57:09.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:09.750354  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:09.750431  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:10.250176  146734 type.go:165] "Request Body" body=""
	I1222 22:57:10.250252  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:10.250587  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:10.750397  146734 type.go:165] "Request Body" body=""
	I1222 22:57:10.750472  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:10.750832  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:11.249645  146734 type.go:165] "Request Body" body=""
	I1222 22:57:11.249739  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:11.250107  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:11.749864  146734 type.go:165] "Request Body" body=""
	I1222 22:57:11.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:11.750316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:12.249663  146734 type.go:165] "Request Body" body=""
	I1222 22:57:12.249748  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:12.250105  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:12.250178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:12.750096  146734 type.go:165] "Request Body" body=""
	I1222 22:57:12.750174  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:12.750521  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:13.250403  146734 type.go:165] "Request Body" body=""
	I1222 22:57:13.250481  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:13.250854  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:13.749624  146734 type.go:165] "Request Body" body=""
	I1222 22:57:13.749717  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:13.750062  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:14.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:57:14.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:14.250173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:14.250237  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:14.749931  146734 type.go:165] "Request Body" body=""
	I1222 22:57:14.750016  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:14.750331  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.250077  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.250149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.250459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.750281  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.750366  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.750687  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:16.250490  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.250582  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.250949  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:16.251012  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:16.749742  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.749829  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.750173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.249915  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.250011  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.750089  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.750167  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.750505  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.250322  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.250402  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.250802  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.749588  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.749719  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.750078  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:18.750142  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:19.249857  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.249933  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.250256  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:19.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.749854  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.750208  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.249748  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.249831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.250157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.750100  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.750194  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.750870  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:20.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:21.249638  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.249718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:21.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.749701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.750025  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.249683  146734 type.go:165] "Request Body" body=""
	I1222 22:57:22.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:22.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.750117  146734 node_ready.go:38] duration metric: took 6m0.000675026s for node "functional-384766" to be "Ready" ...
	I1222 22:57:22.752685  146734 out.go:203] 
	W1222 22:57:22.753745  146734 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 22:57:22.753760  146734 out.go:285] * 
	W1222 22:57:22.753991  146734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 22:57:22.755053  146734 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.767074805Z" level=info msg="Loading containers: done."
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776636662Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776667996Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776702670Z" level=info msg="Initializing buildkit"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.795232210Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799403213Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799466264Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799532589Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799493974Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:51:20 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:20 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:51:21 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:51:21 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:57:33.637333   17333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:33.637911   17333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:33.639495   17333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:33.639986   17333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:33.641511   17333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 22:57:33 up  2:39,  0 user,  load average: 0.77, 0.34, 0.57
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 22:57:30 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:30 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 820.
	Dec 22 22:57:30 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:30 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:31 functional-384766 kubelet[17017]: E1222 22:57:31.037693   17017 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:31 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:31 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:31 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 821.
	Dec 22 22:57:31 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:31 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:31 functional-384766 kubelet[17083]: E1222 22:57:31.785399   17083 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:31 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:31 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:32 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 822.
	Dec 22 22:57:32 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:32 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:32 functional-384766 kubelet[17199]: E1222 22:57:32.543718   17199 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:32 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:32 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:33 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 22 22:57:33 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:33 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:33 functional-384766 kubelet[17225]: E1222 22:57:33.286472   17225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:33 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:33 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
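
The wall of retries at the top of this dump is minikube's node-readiness wait: node_ready.go polls GET /api/v1/nodes/functional-384766 twice a second until the Ready condition turns True or a 6m deadline expires, and here every poll died with connection refused because the kubelet (see the kubelet section above) never stayed up. A minimal client-go sketch of that wait, assuming an already-built *kubernetes.Clientset (illustrative only, not minikube's actual helper):

	package nodewait
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitNodeReady retries every 500ms for up to 6m -- the cadence and
	// timeout visible in the log above. Poll errors (e.g. connection
	// refused) are swallowed so the loop keeps retrying, matching the
	// "will retry" behavior logged at node_ready.go:55.
	func waitNodeReady(cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient error: retry
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
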
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (298.342229ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (1.84s)
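
Root cause worth pulling out of the kubelet log above: the v1.35.0-rc.1 kubelet fails config validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restart-loops it 800+ times, so the apiserver on 8441 never binds and every kubectl-shaped test in this group fails downstream. The validation boils down to detecting whether the unified cgroup v2 hierarchy is mounted; a rough standalone probe (the file-based heuristic is an assumption modeled on runc's detection, not kubelet's exact code):

	package main
	
	import (
		"fmt"
		"os"
	)
	
	// cgroupV2 reports whether the host runs the unified cgroup v2
	// hierarchy: on v2, /sys/fs/cgroup is a cgroup2fs mount that exposes
	// cgroup.controllers; on a v1 host that file is absent.
	func cgroupV2() bool {
		_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
		return err == nil
	}
	
	func main() {
		if cgroupV2() {
			fmt.Println("cgroup v2 host: kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1 host: kubelet v1.35.0-rc.1 exits at validation")
		}
	}

If that reading is right, the fix is a cgroup v2 host image for this job (the dockerd log above already warns that cgroup v1 support is deprecated), not a change to the tests themselves.
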

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (1.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-384766 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-384766 get pods: exit status 1 (103.522376ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-384766 get pods": exit status 1
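
Before dumping diagnostics, the harness guards kubectl-based steps with the `status --format={{.APIServer}}` probe seen above ("apiserver is not running, skipping kubectl commands"). In Go terms the guard is roughly this (a reconstruction of the logic, not the helpers_test.go source):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// `minikube status` exits non-zero for a stopped cluster, so the
		// error is tolerated and only the printed state is inspected.
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "functional-384766",
			"-n", "functional-384766").Output()
		state := strings.TrimSpace(string(out))
		if state != "Running" {
			fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q)\n", state)
			return
		}
		// safe to run kubectl / describe nodes here
	}
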
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:
-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
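The inspect output shows the container itself is fine: State.Status is "running", RestartCount is 0, and 8441/tcp is published on 127.0.0.1:32786; only the control plane inside it is down. A single port mapping can be pulled out with the same Go template minikube itself uses for 22/tcp in the logs below, sketched here for the apiserver port:

    docker container inspect functional-384766 \
      -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
    # => 32786 for this run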
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (314.988238ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
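The status probes in this report each read a single field of minikube's status struct, which is why the same profile reports Host "Running" (the docker container is up) while APIServer is "Stopped" (nothing inside it ever started). Several fields can be read in one call; Host and APIServer appear in the templates used by this report, and Kubelet is assumed to be a third field of the same struct:

    out/minikube-linux-amd64 status -p functional-384766 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
    # for this run, likely: host:Running kubelet:Stopped apiserver:Stopped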
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-580825 ssh pgrep buildkitd                                                                                                             │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image   │ functional-580825 image ls --format json --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format short --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image   │ functional-580825 image ls --format yaml --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format table --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr                                            │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls                                                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ delete  │ -p functional-580825                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ start   │ -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start   │ -p functional-384766 --alsologtostderr -v=8                                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:51 UTC │                     │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:latest                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add minikube-local-cache-test:functional-384766                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache delete minikube-local-cache-test:functional-384766                                                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl images                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	│ cache   │ functional-384766 cache reload                                                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ kubectl │ functional-384766 kubectl -- --context functional-384766 get pods                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:51:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:51:17.565426  146734 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:51:17.565716  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565727  146734 out.go:374] Setting ErrFile to fd 2...
	I1222 22:51:17.565732  146734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:51:17.565972  146734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:51:17.566463  146734 out.go:368] Setting JSON to false
	I1222 22:51:17.567434  146734 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9218,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:51:17.567486  146734 start.go:143] virtualization: kvm guest
	I1222 22:51:17.569465  146734 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:51:17.570460  146734 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:51:17.570465  146734 notify.go:221] Checking for updates...
	I1222 22:51:17.572456  146734 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:51:17.573608  146734 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:17.574791  146734 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:51:17.575840  146734 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:51:17.576824  146734 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:51:17.578279  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:17.578404  146734 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:51:17.602058  146734 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:51:17.602223  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.652786  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.644025132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.652901  146734 docker.go:319] overlay module found
	I1222 22:51:17.655127  146734 out.go:179] * Using the docker driver based on existing profile
	I1222 22:51:17.656150  146734 start.go:309] selected driver: docker
	I1222 22:51:17.656165  146734 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.656249  146734 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:51:17.656337  146734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:51:17.716062  146734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 22:51:17.707507925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:51:17.716919  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:17.717012  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:17.717085  146734 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:17.719515  146734 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:51:17.720631  146734 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:51:17.721792  146734 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:51:17.723064  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:17.723095  146734 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:51:17.723112  146734 cache.go:65] Caching tarball of preloaded images
	I1222 22:51:17.723172  146734 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:51:17.723191  146734 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:51:17.723198  146734 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:51:17.723299  146734 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:51:17.742349  146734 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:51:17.742368  146734 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:51:17.742396  146734 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:51:17.742444  146734 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:51:17.742506  146734 start.go:364] duration metric: took 41.881µs to acquireMachinesLock for "functional-384766"
	I1222 22:51:17.742535  146734 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:51:17.742545  146734 fix.go:54] fixHost starting: 
	I1222 22:51:17.742810  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:17.759507  146734 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:51:17.759531  146734 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:51:17.761090  146734 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:51:17.761123  146734 machine.go:94] provisionDockerMachine start ...
	I1222 22:51:17.761180  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.778682  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.778900  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.778912  146734 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:51:17.919326  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:17.919369  146734 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:51:17.919431  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:17.936992  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:17.937221  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:17.937234  146734 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:51:18.086470  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:51:18.086564  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.104748  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.105051  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.105077  146734 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:51:18.246730  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:51:18.246760  146734 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:51:18.246782  146734 ubuntu.go:190] setting up certificates
	I1222 22:51:18.246792  146734 provision.go:84] configureAuth start
	I1222 22:51:18.246854  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:18.265782  146734 provision.go:143] copyHostCerts
	I1222 22:51:18.265828  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.265879  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:51:18.265900  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:51:18.266005  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:51:18.266139  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266163  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:51:18.266175  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:51:18.266220  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:51:18.266317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266344  146734 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:51:18.266355  146734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:51:18.266400  146734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:51:18.266499  146734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:51:18.330118  146734 provision.go:177] copyRemoteCerts
	I1222 22:51:18.330177  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:51:18.330210  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.347420  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:18.447556  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1222 22:51:18.447646  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:51:18.464129  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1222 22:51:18.464180  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:51:18.480702  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1222 22:51:18.480757  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 22:51:18.496998  146734 provision.go:87] duration metric: took 250.195084ms to configureAuth
	I1222 22:51:18.497021  146734 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:51:18.497168  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:18.497218  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.514380  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.514623  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.514636  146734 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:51:18.655354  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:51:18.655383  146734 ubuntu.go:71] root file system type: overlay
	I1222 22:51:18.655533  146734 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:51:18.655634  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.673540  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.673819  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.673915  146734 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:51:18.823487  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:51:18.823601  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:18.841347  146734 main.go:144] libmachine: Using SSH client type: native
	I1222 22:51:18.841608  146734 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:51:18.841639  146734 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:51:18.987007  146734 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:51:18.987042  146734 machine.go:97] duration metric: took 1.225905804s to provisionDockerMachine
	I1222 22:51:18.987059  146734 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:51:18.987075  146734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:51:18.987145  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:51:18.987199  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.006696  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.107530  146734 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:51:19.110931  146734 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1222 22:51:19.110952  146734 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1222 22:51:19.110959  146734 command_runner.go:130] > VERSION_ID="12"
	I1222 22:51:19.110964  146734 command_runner.go:130] > VERSION="12 (bookworm)"
	I1222 22:51:19.110979  146734 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1222 22:51:19.110985  146734 command_runner.go:130] > ID=debian
	I1222 22:51:19.110992  146734 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1222 22:51:19.111000  146734 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1222 22:51:19.111012  146734 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1222 22:51:19.111100  146734 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:51:19.111124  146734 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:51:19.111137  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:51:19.111205  146734 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:51:19.111317  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:51:19.111330  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /etc/ssl/certs/758032.pem
	I1222 22:51:19.111426  146734 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:51:19.111434  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> /etc/test/nested/copy/75803/hosts
	I1222 22:51:19.111495  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:51:19.119122  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:19.135900  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:51:19.152438  146734 start.go:296] duration metric: took 165.360222ms for postStartSetup
	I1222 22:51:19.152512  146734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:51:19.152568  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.170181  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.267525  146734 command_runner.go:130] > 37%
	I1222 22:51:19.267628  146734 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:51:19.272133  146734 command_runner.go:130] > 185G
	I1222 22:51:19.272164  146734 fix.go:56] duration metric: took 1.529618595s for fixHost
	I1222 22:51:19.272178  146734 start.go:83] releasing machines lock for "functional-384766", held for 1.529658247s
	I1222 22:51:19.272243  146734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:51:19.290506  146734 ssh_runner.go:195] Run: cat /version.json
	I1222 22:51:19.290562  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.290583  146734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:51:19.290685  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:19.307884  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.308688  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:19.461522  146734 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1222 22:51:19.463216  146734 command_runner.go:130] > {"iso_version": "v1.37.0-1766254259-22261", "kicbase_version": "v0.0.48-1766394456-22288", "minikube_version": "v1.37.0", "commit": "069cfc84263169a672fdad8d37486b5cb35673ac"}
	I1222 22:51:19.463366  146734 ssh_runner.go:195] Run: systemctl --version
	I1222 22:51:19.469697  146734 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1222 22:51:19.469761  146734 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1222 22:51:19.469847  146734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1222 22:51:19.474292  146734 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1222 22:51:19.474367  146734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:51:19.474416  146734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:51:19.482031  146734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:51:19.482056  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.482091  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.482215  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.495227  146734 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1222 22:51:19.495298  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:51:19.503438  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:51:19.511525  146734 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:51:19.511574  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:51:19.519676  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.527517  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:51:19.535615  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:51:19.543569  146734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:51:19.550965  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:51:19.559037  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:51:19.567079  146734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:51:19.575222  146734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:51:19.582102  146734 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1222 22:51:19.582154  146734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:51:19.588882  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:19.668907  146734 ssh_runner.go:195] Run: sudo systemctl restart containerd
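
The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the detected "cgroupfs" driver before the daemon is restarted. A sketch of the central substitution (forcing SystemdCgroup = false while preserving indentation) done with a Go regexp instead of sed; illustrative only:

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup mirrors the log's
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
// keeping the captured leading whitespace intact.
func setSystemdCgroup(config []byte, enabled bool) []byte {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAll(config, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
}

func main() {
	in := "    SystemdCgroup = true\n"
	fmt.Print(string(setSystemdCgroup([]byte(in), false))) // "    SystemdCgroup = false"
}
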
	I1222 22:51:19.740882  146734 start.go:496] detecting cgroup driver to use...
	I1222 22:51:19.740926  146734 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:51:19.740967  146734 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:51:19.753727  146734 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1222 22:51:19.753762  146734 command_runner.go:130] > [Unit]
	I1222 22:51:19.753770  146734 command_runner.go:130] > Description=Docker Application Container Engine
	I1222 22:51:19.753778  146734 command_runner.go:130] > Documentation=https://docs.docker.com
	I1222 22:51:19.753787  146734 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1222 22:51:19.753797  146734 command_runner.go:130] > Wants=network-online.target containerd.service
	I1222 22:51:19.753808  146734 command_runner.go:130] > Requires=docker.socket
	I1222 22:51:19.753815  146734 command_runner.go:130] > StartLimitBurst=3
	I1222 22:51:19.753825  146734 command_runner.go:130] > StartLimitIntervalSec=60
	I1222 22:51:19.753833  146734 command_runner.go:130] > [Service]
	I1222 22:51:19.753841  146734 command_runner.go:130] > Type=notify
	I1222 22:51:19.753848  146734 command_runner.go:130] > Restart=always
	I1222 22:51:19.753862  146734 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1222 22:51:19.753882  146734 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1222 22:51:19.753896  146734 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1222 22:51:19.753910  146734 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1222 22:51:19.753923  146734 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1222 22:51:19.753937  146734 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1222 22:51:19.753952  146734 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1222 22:51:19.753969  146734 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1222 22:51:19.753983  146734 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1222 22:51:19.753991  146734 command_runner.go:130] > ExecStart=
	I1222 22:51:19.754018  146734 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1222 22:51:19.754031  146734 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1222 22:51:19.754046  146734 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1222 22:51:19.754060  146734 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1222 22:51:19.754067  146734 command_runner.go:130] > LimitNOFILE=infinity
	I1222 22:51:19.754076  146734 command_runner.go:130] > LimitNPROC=infinity
	I1222 22:51:19.754084  146734 command_runner.go:130] > LimitCORE=infinity
	I1222 22:51:19.754095  146734 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1222 22:51:19.754107  146734 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1222 22:51:19.754115  146734 command_runner.go:130] > TasksMax=infinity
	I1222 22:51:19.754124  146734 command_runner.go:130] > TimeoutStartSec=0
	I1222 22:51:19.754137  146734 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1222 22:51:19.754152  146734 command_runner.go:130] > Delegate=yes
	I1222 22:51:19.754162  146734 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1222 22:51:19.754171  146734 command_runner.go:130] > KillMode=process
	I1222 22:51:19.754179  146734 command_runner.go:130] > OOMScoreAdjust=-500
	I1222 22:51:19.754187  146734 command_runner.go:130] > [Install]
	I1222 22:51:19.754196  146734 command_runner.go:130] > WantedBy=multi-user.target
	I1222 22:51:19.754834  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.766639  146734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:51:19.781290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:51:19.792290  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:51:19.803490  146734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:51:19.815697  146734 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1222 22:51:19.816642  146734 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:51:19.820210  146734 command_runner.go:130] > /usr/bin/cri-dockerd
	I1222 22:51:19.820315  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:51:19.827693  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:51:19.839649  146734 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:51:19.921176  146734 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:51:20.004043  146734 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:51:20.004160  146734 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
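
docker.go:578 then writes a small /etc/docker/daemon.json (130 bytes here) so dockerd itself uses the cgroupfs driver. The log does not echo the file's contents; the snippet below marshals a plausible equivalent in Go, and the exact field set is an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed shape: only the cgroup-driver choice is implied by the log,
	// expressed via Docker's exec-opts; anything else minikube writes is omitted.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
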
	I1222 22:51:20.017007  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:51:20.028524  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:20.107815  146734 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:51:20.801234  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:51:20.813428  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:51:20.824782  146734 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:51:20.839450  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:20.850829  146734 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:51:20.931099  146734 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:51:21.012149  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.092742  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:51:21.120647  146734 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:51:21.132196  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.256485  146734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:51:21.327564  146734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:51:21.340042  146734 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:51:21.340117  146734 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:51:21.343842  146734 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1222 22:51:21.343869  146734 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1222 22:51:21.343877  146734 command_runner.go:130] > Device: 0,75	Inode: 1744        Links: 1
	I1222 22:51:21.343888  146734 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1222 22:51:21.343895  146734 command_runner.go:130] > Access: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343909  146734 command_runner.go:130] > Modify: 2025-12-22 22:51:21.266838495 +0000
	I1222 22:51:21.343924  146734 command_runner.go:130] > Change: 2025-12-22 22:51:21.279839753 +0000
	I1222 22:51:21.343935  146734 command_runner.go:130] >  Birth: 2025-12-22 22:51:21.266838495 +0000
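
start.go:543 gives cri-dockerd up to 60s to expose its socket before crictl is probed; the stat output above is one successful iteration of that wait. A sketch of the loop, assuming a fixed 500ms poll interval (minikube's real interval may differ):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or timeout elapses, mirroring
// "Will wait 60s for socket path /var/run/cri-dockerd.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
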
	I1222 22:51:21.343976  146734 start.go:564] Will wait 60s for crictl version
	I1222 22:51:21.344020  146734 ssh_runner.go:195] Run: which crictl
	I1222 22:51:21.347282  146734 command_runner.go:130] > /usr/local/bin/crictl
	I1222 22:51:21.347341  146734 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:51:21.370719  146734 command_runner.go:130] > Version:  0.1.0
	I1222 22:51:21.370739  146734 command_runner.go:130] > RuntimeName:  docker
	I1222 22:51:21.370743  146734 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1222 22:51:21.370748  146734 command_runner.go:130] > RuntimeApiVersion:  v1
	I1222 22:51:21.370764  146734 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:51:21.370812  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.395767  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.395836  146734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:51:21.418820  146734 command_runner.go:130] > 29.1.3
	I1222 22:51:21.422122  146734 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:51:21.422206  146734 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:51:21.439338  146734 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:51:21.443526  146734 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1222 22:51:21.443628  146734 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:51:21.443753  146734 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:51:21.443822  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.464281  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.464308  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.464318  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.464325  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.464332  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.464340  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.464348  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.464366  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.464395  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.464407  146734 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:51:21.464455  146734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:51:21.482666  146734 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 22:51:21.482684  146734 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 22:51:21.482690  146734 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 22:51:21.482697  146734 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 22:51:21.482704  146734 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1222 22:51:21.482712  146734 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1222 22:51:21.482729  146734 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1222 22:51:21.482739  146734 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:21.483998  146734 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 22:51:21.484022  146734 cache_images.go:86] Images are preloaded, skipping loading
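
cache_images.go:86 skips the preload tarball because every image required for v1.35.0-rc.1 already appears in the docker images output. A sketch of that containment check; the set logic is illustrative, the image names are taken from the log:

package main

import "fmt"

// imagesPreloaded reports whether every required image is present in the
// output of `docker images --format {{.Repository}}:{{.Tag}}`.
func imagesPreloaded(have, want []string) bool {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	for _, img := range want {
		if !got[img] {
			return false
		}
	}
	return true
}

func main() {
	have := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/pause:3.10.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	fmt.Println(imagesPreloaded(have, []string{"registry.k8s.io/pause:3.10.1"})) // true
}
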
	I1222 22:51:21.484036  146734 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:51:21.484172  146734 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
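
The kubelet unit above is rendered per profile: the binary path carries the Kubernetes version, and hostname-override/node-ip come from the node config that follows it. A minimal text/template sketch of that rendering; the template shape is an assumption, only the substituted values are from the log:

package main

import (
	"os"
	"text/template"
)

// unitParams holds the per-profile values visible in the log output above.
type unitParams struct {
	Version  string
	NodeName string
	NodeIP   string
}

const unitTmpl = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, unitParams{
		Version:  "v1.35.0-rc.1",
		NodeName: "functional-384766",
		NodeIP:   "192.168.49.2",
	})
}
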
	I1222 22:51:21.484238  146734 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:51:21.532066  146734 command_runner.go:130] > cgroupfs
	I1222 22:51:21.533783  146734 cni.go:84] Creating CNI manager for ""
	I1222 22:51:21.533808  146734 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:51:21.533825  146734 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:51:21.533845  146734 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:51:21.533961  146734 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 22:51:21.534020  146734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:51:21.542124  146734 command_runner.go:130] > kubeadm
	I1222 22:51:21.542141  146734 command_runner.go:130] > kubectl
	I1222 22:51:21.542144  146734 command_runner.go:130] > kubelet
	I1222 22:51:21.542165  146734 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:51:21.542214  146734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:51:21.549624  146734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:51:21.561393  146734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:51:21.572932  146734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1222 22:51:21.584412  146734 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:51:21.587798  146734 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1222 22:51:21.587903  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:21.667778  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:21.997732  146734 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:51:21.997755  146734 certs.go:195] generating shared ca certs ...
	I1222 22:51:21.997774  146734 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:21.997942  146734 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:51:21.998024  146734 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:51:21.998042  146734 certs.go:257] generating profile certs ...
	I1222 22:51:21.998184  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:51:21.998247  146734 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:51:21.998298  146734 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:51:21.998317  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1222 22:51:21.998340  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1222 22:51:21.998365  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1222 22:51:21.998382  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1222 22:51:21.998399  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1222 22:51:21.998418  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1222 22:51:21.998436  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1222 22:51:21.998454  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1222 22:51:21.998527  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:51:21.998578  146734 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:51:21.998635  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:51:21.998684  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:51:21.998717  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:51:21.998750  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:51:21.998813  146734 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:51:21.998854  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> /usr/share/ca-certificates/758032.pem
	I1222 22:51:21.998877  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:21.998896  146734 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem -> /usr/share/ca-certificates/75803.pem
	I1222 22:51:21.999493  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:51:22.018141  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:51:22.036416  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:51:22.053080  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:51:22.069323  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:51:22.085369  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:51:22.101485  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:51:22.117634  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:51:22.133612  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:51:22.150125  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:51:22.166578  146734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:51:22.182911  146734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:51:22.194486  146734 ssh_runner.go:195] Run: openssl version
	I1222 22:51:22.199935  146734 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1222 22:51:22.200169  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.206913  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:51:22.213732  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217037  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217075  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.217111  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:51:22.249675  146734 command_runner.go:130] > b5213941
	I1222 22:51:22.250033  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:51:22.257095  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.264071  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:51:22.271042  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274411  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274445  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.274483  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:51:22.307772  146734 command_runner.go:130] > 51391683
	I1222 22:51:22.308113  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 22:51:22.315176  146734 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.322196  146734 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:51:22.329109  146734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332667  146734 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332691  146734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.332732  146734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:51:22.365940  146734 command_runner.go:130] > 3ec20f2e
	I1222 22:51:22.366181  146734 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
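
The openssl/ln sequence above installs each CA into OpenSSL's CApath layout: hash the subject (b5213941 for minikubeCA.pem, 51391683 for 75803.pem, 3ec20f2e for 758032.pem), then ensure /etc/ssl/certs/<hash>.0 is a symlink to the cert. A sketch of one round, shelling out to openssl the same way the log does; the link step is shown as the command it would run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns what `openssl x509 -hash -noout -in cert` prints:
// the filename stem OpenSSL looks up in a CApath directory.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	h, err := subjectHash(cert)
	if err != nil {
		fmt.Println(err)
		return
	}
	// The log then verifies the result with: sudo test -L /etc/ssl/certs/<hash>.0
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, h)
}
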
	I1222 22:51:22.373802  146734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377513  146734 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:51:22.377537  146734 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1222 22:51:22.377543  146734 command_runner.go:130] > Device: 8,1	Inode: 809094      Links: 1
	I1222 22:51:22.377550  146734 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1222 22:51:22.377558  146734 command_runner.go:130] > Access: 2025-12-22 22:47:15.370061162 +0000
	I1222 22:51:22.377566  146734 command_runner.go:130] > Modify: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377574  146734 command_runner.go:130] > Change: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377602  146734 command_runner.go:130] >  Birth: 2025-12-22 22:43:13.446668027 +0000
	I1222 22:51:22.377678  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:51:22.411266  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.411570  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:51:22.445025  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.445322  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:51:22.479095  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.479395  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:51:22.512263  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.512537  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:51:22.545264  146734 command_runner.go:130] > Certificate will not expire
	I1222 22:51:22.545554  146734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1222 22:51:22.578867  146734 command_runner.go:130] > Certificate will not expire
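
Each -checkend 86400 call above asks openssl whether a control-plane cert expires within the next 24 hours. The same test in pure Go via crypto/x509, as a sketch (path taken from the log; error handling kept minimal):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in path expires
// within d - the Go analogue of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
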
	I1222 22:51:22.579164  146734 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:51:22.579364  146734 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:51:22.598061  146734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:51:22.605833  146734 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1222 22:51:22.605851  146734 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1222 22:51:22.605860  146734 command_runner.go:130] > /var/lib/minikube/etcd:
	I1222 22:51:22.605880  146734 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:51:22.605891  146734 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:51:22.605932  146734 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:51:22.613011  146734 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:51:22.613379  146734 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-384766" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.613493  146734 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "functional-384766" cluster setting kubeconfig missing "functional-384766" context setting]
	I1222 22:51:22.613840  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.614238  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.614401  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.614887  146734 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 22:51:22.614906  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 22:51:22.614915  146734 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1222 22:51:22.614921  146734 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1222 22:51:22.614926  146734 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1222 22:51:22.614933  146734 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 22:51:22.614941  146734 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1222 22:51:22.615340  146734 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:51:22.622321  146734 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1222 22:51:22.622350  146734 kubeadm.go:602] duration metric: took 16.45181ms to restartPrimaryControlPlane
	I1222 22:51:22.622360  146734 kubeadm.go:403] duration metric: took 43.204719ms to StartCluster
	I1222 22:51:22.622376  146734 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.622430  146734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.622875  146734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:51:22.623066  146734 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 22:51:22.623138  146734 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 22:51:22.623233  146734 addons.go:70] Setting storage-provisioner=true in profile "functional-384766"
	I1222 22:51:22.623261  146734 addons.go:239] Setting addon storage-provisioner=true in "functional-384766"
	I1222 22:51:22.623284  146734 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:51:22.623296  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.623288  146734 addons.go:70] Setting default-storageclass=true in profile "functional-384766"
	I1222 22:51:22.623322  146734 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-384766"
	I1222 22:51:22.623660  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.623809  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.624438  146734 out.go:179] * Verifying Kubernetes components...
	I1222 22:51:22.625531  146734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:51:22.644170  146734 loader.go:405] Config loaded from file:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:51:22.644380  146734 kapi.go:59] client config for functional-384766: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 22:51:22.644456  146734 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 22:51:22.644766  146734 addons.go:239] Setting addon default-storageclass=true in "functional-384766"
	I1222 22:51:22.644810  146734 host.go:66] Checking if "functional-384766" exists ...
	I1222 22:51:22.645336  146734 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:51:22.645513  146734 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.645531  146734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 22:51:22.645584  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.667387  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.668028  146734 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.668061  146734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 22:51:22.668129  146734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:51:22.686127  146734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:51:22.735817  146734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:51:22.749391  146734 node_ready.go:35] waiting up to 6m0s for node "functional-384766" to be "Ready" ...
	I1222 22:51:22.749553  146734 type.go:165] "Request Body" body=""
	I1222 22:51:22.749681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:22.749924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
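
node_ready.go:35 then polls GET /api/v1/nodes/functional-384766 roughly every 500ms for up to 6m; the empty status="" responses here mean the apiserver is not answering yet. A sketch of that wait as a plain HTTP poll; the real client authenticates with the profile's client cert, so InsecureSkipVerify and the bare 200 check below are simplifications:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollAPIServer retries the nodes GET until it receives HTTP 200 or the
// timeout passes.
func pollAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		// Sketch only: the real request presents the profile's client cert and CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not ready within %s", timeout)
}

func main() {
	fmt.Println(pollAPIServer("https://192.168.49.2:8441/api/v1/nodes/functional-384766", 6*time.Minute))
}
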
	I1222 22:51:22.791529  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:22.791702  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:22.858228  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.858293  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858334  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:22.858349  146734 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:22.860247  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.114793  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.124266  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.170075  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.170134  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.179073  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.179145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.250418  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.250774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.384101  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.434813  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.434866  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.600155  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:23.651352  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.651412  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:23.749655  146734 type.go:165] "Request Body" body=""
	I1222 22:51:23.749735  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:23.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:23.901355  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:23.952200  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:23.952267  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.239666  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:24.250121  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.250189  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.250430  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:24.294448  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.294492  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:24.750059  146734 type.go:165] "Request Body" body=""
	I1222 22:51:24.750149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:24.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:24.750582  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
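
The "(will retry)" wording is the poll loop swallowing the error and continuing; the request timestamps (…:24.750, …:25.250, …:25.750) show the roughly 500ms cadence. A self-contained sketch of such a loop using apimachinery's wait helpers; the interval and timeout here are guesses, and checkNode is a stub standing in for the Ready lookup sketched above:

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkNode stands in for the real GET /api/v1/nodes/<name> check.
func checkNode() (bool, error) {
	return false, errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
}

func main() {
	err := wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		ready, err := checkNode()
		if err != nil {
			// Log and keep polling instead of aborting, matching the
			// "(will retry)" warnings in the log.
			fmt.Println("error getting node (will retry):", err)
			return false, nil
		}
		return ready, nil
	})
	fmt.Println("poll finished:", err)
}
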
	I1222 22:51:24.937883  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:24.989534  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:24.989576  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.250004  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.250083  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.250431  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:25.372773  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:25.425171  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:25.425216  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:25.749629  146734 type.go:165] "Request Body" body=""
	I1222 22:51:25.749702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:25.750010  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.170572  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:26.222069  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.222131  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.250327  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.250414  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.250759  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:26.440137  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:26.491948  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:26.492006  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:26.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:51:26.750538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:26.750885  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:26.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:27.250541  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.250646  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:27.355175  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:27.403566  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:27.406149  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:27.749989  146734 type.go:165] "Request Body" body=""
	I1222 22:51:27.750066  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:27.750396  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.250002  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.250075  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.250397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:28.438810  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:28.487114  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:28.489616  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:28.750061  146734 type.go:165] "Request Body" body=""
	I1222 22:51:28.750134  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:28.750419  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:29.250032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.250106  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.250445  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:29.250522  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:29.750041  146734 type.go:165] "Request Body" body=""
	I1222 22:51:29.750138  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:29.750509  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.249736  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.249807  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.250111  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:30.636760  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:30.689934  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:30.689988  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:30.750216  146734 type.go:165] "Request Body" body=""
	I1222 22:51:30.750316  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:30.750667  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:31.250328  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.250434  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.250799  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:31.250876  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:31.750450  146734 type.go:165] "Request Body" body=""
	I1222 22:51:31.750530  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:31.750869  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.250580  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.250950  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.711876  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:32.750368  146734 type.go:165] "Request Body" body=""
	I1222 22:51:32.750445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:32.750774  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:32.760899  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:32.763771  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.250469  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.250543  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:33.250917  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:33.406152  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:33.457687  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:33.457745  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:33.750192  146734 type.go:165] "Request Body" body=""
	I1222 22:51:33.750291  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:33.750643  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.250274  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.250352  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.250676  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:34.749812  146734 type.go:165] "Request Body" body=""
	I1222 22:51:34.749877  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:34.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.249850  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:35.516575  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:35.570400  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:35.570450  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:35.749757  146734 type.go:165] "Request Body" body=""
	I1222 22:51:35.749831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:35.750172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:35.750238  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:36.249789  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.249888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.250250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:36.749817  146734 type.go:165] "Request Body" body=""
	I1222 22:51:36.749889  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:36.750217  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.249921  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.250262  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:37.750101  146734 type.go:165] "Request Body" body=""
	I1222 22:51:37.750202  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:37.750527  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:37.750609  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:38.250259  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.250333  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.250692  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:38.358924  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:38.409955  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:38.410034  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:38.750557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:38.750654  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:38.750998  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.249528  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.249647  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.249920  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:39.749563  146734 type.go:165] "Request Body" body=""
	I1222 22:51:39.749697  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:39.750029  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:40.249635  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.249710  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.250037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:40.250107  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:40.749663  146734 type.go:165] "Request Body" body=""
	I1222 22:51:40.749734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:40.750058  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.249687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:41.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:51:41.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:41.750194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:42.249756  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.249861  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:42.250268  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:42.750151  146734 type.go:165] "Request Body" body=""
	I1222 22:51:42.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:42.750674  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.250325  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.250412  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.250779  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:43.750422  146734 type.go:165] "Request Body" body=""
	I1222 22:51:43.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:43.750837  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:44.250504  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.250574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.250924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:44.250995  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:44.670446  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:44.719419  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:44.722302  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:44.722343  146734 retry.go:84] will retry after 11.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
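
After repeated immediate failures the apply path hands the command to minikube's retry helper, which schedules a delayed attempt (11.9s here; 8.9s and 19.1s appear below), suggesting delays that grow with jitter between attempts. A generic sketch of that pattern, assuming exponential growth plus random jitter rather than whatever retry.go actually implements:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered delay between failures.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base * time.Duration(1<<i)
		delay += time.Duration(rand.Int63n(int64(delay / 2))) // jitter
		fmt.Printf("will retry after %s: %v\n", delay.Round(100*time.Millisecond), err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(4, 2*time.Second, func() error {
		return errors.New("connection refused") // stand-in for the kubectl apply
	})
}
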
	I1222 22:51:44.750527  146734 type.go:165] "Request Body" body=""
	I1222 22:51:44.750632  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:44.750954  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.250633  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.250718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.251044  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:45.749665  146734 type.go:165] "Request Body" body=""
	I1222 22:51:45.749749  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:45.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.249725  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:46.749687  146734 type.go:165] "Request Body" body=""
	I1222 22:51:46.749756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:46.750050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:46.750108  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:47.249647  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.249753  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.250081  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:47.750272  146734 type.go:165] "Request Body" body=""
	I1222 22:51:47.750344  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:47.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.250351  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.250445  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.250816  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:48.750455  146734 type.go:165] "Request Body" body=""
	I1222 22:51:48.750540  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:48.750902  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:48.750964  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:49.250555  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.250653  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.250985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:49.750603  146734 type.go:165] "Request Body" body=""
	I1222 22:51:49.750681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:49.750999  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.249551  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.249641  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.249968  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:50.750576  146734 type.go:165] "Request Body" body=""
	I1222 22:51:50.750686  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:50.751008  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:50.751079  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:51.249557  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.249656  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.249983  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:51.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:51:51.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:51.750094  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.249698  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:52.572783  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:51:52.624461  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:52.624509  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.624539  146734 retry.go:84] will retry after 8.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:52.749682  146734 type.go:165] "Request Body" body=""
	I1222 22:51:52.749751  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:52.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:53.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.249792  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.250136  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:53.250202  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:53.749743  146734 type.go:165] "Request Body" body=""
	I1222 22:51:53.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:53.750165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.249922  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.250320  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:54.749879  146734 type.go:165] "Request Body" body=""
	I1222 22:51:54.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:54.750325  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:55.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.249811  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.250186  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:55.250256  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:55.749720  146734 type.go:165] "Request Body" body=""
	I1222 22:51:55.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:55.750101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.249700  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.249777  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:56.630698  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:51:56.682682  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:51:56.682728  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:51:56.682750  146734 retry.go:84] will retry after 19.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
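
The scheduled delays grow per manifest (storageclass: 11.9s above, then 19.1s here, with storage-provisioner's 8.9s in between), consistent with the jittered backoff sketched earlier; since the apiserver on port 8441 never starts accepting connections, every rescheduled apply fails the same way.
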
	I1222 22:51:56.749962  146734 type.go:165] "Request Body" body=""
	I1222 22:51:56.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:56.750390  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.249859  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.250169  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:57.750032  146734 type.go:165] "Request Body" body=""
	I1222 22:51:57.750112  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:57.750459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:57.750526  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:51:58.250052  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.250129  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.250484  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:58.750074  146734 type.go:165] "Request Body" body=""
	I1222 22:51:58.750164  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:58.750559  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.250376  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.250455  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:51:59.750547  146734 type.go:165] "Request Body" body=""
	I1222 22:51:59.750668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:51:59.751053  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:51:59.751124  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:00.249679  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.249756  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.250124  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:00.749766  146734 type.go:165] "Request Body" body=""
	I1222 22:52:00.749870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:00.750200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.250214  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:01.555677  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:01.608817  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:01.608873  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:01.608898  146734 retry.go:84] will retry after 11.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
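
	[annotation] Note that the error's suggestion to pass --validate=false would not unblock this apply: validation fails only because kubectl cannot fetch /openapi/v2 from the same apiserver it must ultimately submit the manifest to, so the apply would be refused either way. A sketch that probes that exact endpoint (URL taken from the error above) before bothering with an apply:

	    // Probe the /openapi/v2 endpoint that kubectl's client-side
	    // validation fetches. If it is unreachable, kubectl apply would
	    // fail regardless of --validate=false.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func openapiReachable(base string) bool {
	    	c := &http.Client{
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    		Timeout:   5 * time.Second,
	    	}
	    	resp, err := c.Get(base + "/openapi/v2?timeout=32s")
	    	if err != nil {
	    		fmt.Println("openapi unreachable:", err)
	    		return false
	    	}
	    	defer resp.Body.Close()
	    	return resp.StatusCode == http.StatusOK
	    }

	    func main() {
	    	if !openapiReachable("https://localhost:8441") {
	    		fmt.Println("apiserver down; skip kubectl apply and retry later")
	    	}
	    }
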
	I1222 22:52:01.750139  146734 type.go:165] "Request Body" body=""
	I1222 22:52:01.750232  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:01.750541  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:02.250350  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.250446  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.250884  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:02.250959  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:02.749991  146734 type.go:165] "Request Body" body=""
	I1222 22:52:02.750087  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:02.750489  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.249785  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.250222  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:03.749863  146734 type.go:165] "Request Body" body=""
	I1222 22:52:03.749953  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:03.750330  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.249910  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.249991  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:04.749787  146734 type.go:165] "Request Body" body=""
	I1222 22:52:04.749878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:04.750255  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:04.750328  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:05.249805  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.249881  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.250215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:05.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:52:05.749905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:05.750236  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.250166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:06.749813  146734 type.go:165] "Request Body" body=""
	I1222 22:52:06.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:06.750292  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:06.750353  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:07.249913  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.249997  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.250350  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:07.750157  146734 type.go:165] "Request Body" body=""
	I1222 22:52:07.750249  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:07.750625  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.250269  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.250349  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.250699  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:08.750338  146734 type.go:165] "Request Body" body=""
	I1222 22:52:08.750417  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:08.750817  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:08.750880  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:09.250447  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.250886  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:09.750542  146734 type.go:165] "Request Body" body=""
	I1222 22:52:09.750651  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:09.751017  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.249667  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.250007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:10.749614  146734 type.go:165] "Request Body" body=""
	I1222 22:52:10.749698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:10.749986  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:11.249644  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.249721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.250050  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:11.250115  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:11.749702  146734 type.go:165] "Request Body" body=""
	I1222 22:52:11.749781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:11.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.250570  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.250676  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.251019  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.749204  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:12.749953  146734 type.go:165] "Request Body" body=""
	I1222 22:52:12.750037  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:12.750364  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:12.803295  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:12.803361  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:12.803388  146734 retry.go:84] will retry after 41s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
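
	[annotation] Two dial targets fail with the same "connection refused" in this log: localhost:8441, dialed by kubectl inside the minikube node (via ssh_runner), and 192.168.49.2:8441, dialed from the host by the readiness poller. Both refusing at once points at the apiserver process itself being down rather than at Docker port mapping. A quick probe of both targets (run on the host, so the localhost dial only approximates the in-node one):

	    // Dial both endpoints seen in the log. If both refuse, suspect the
	    // apiserver process; if only the external one fails, suspect the
	    // container's port mapping instead.
	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
	    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	    		if err != nil {
	    			fmt.Printf("%s: %v\n", addr, err)
	    			continue
	    		}
	    		conn.Close()
	    		fmt.Printf("%s: reachable\n", addr)
	    	}
	    }
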
	I1222 22:52:13.249864  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.249961  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.250341  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:13.250413  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:13.749947  146734 type.go:165] "Request Body" body=""
	I1222 22:52:13.750050  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:13.750385  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.249969  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.250047  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.250429  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:14.750035  146734 type.go:165] "Request Body" body=""
	I1222 22:52:14.750108  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:14.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.250182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:15.749736  146734 type.go:165] "Request Body" body=""
	I1222 22:52:15.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:15.750176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:15.750272  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:15.781356  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:15.834579  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:15.834644  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:15.834678  146734 retry.go:84] will retry after 22s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:16.250185  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.250285  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.250641  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:16.750300  146734 type.go:165] "Request Body" body=""
	I1222 22:52:16.750391  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:16.750749  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.250375  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.250470  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.250796  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:17.750663  146734 type.go:165] "Request Body" body=""
	I1222 22:52:17.750770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:17.751155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:17.751219  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:18.249705  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.249774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.250099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:18.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:52:18.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:18.750127  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.249772  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.249846  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:19.749726  146734 type.go:165] "Request Body" body=""
	I1222 22:52:19.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:19.750128  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:20.249691  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.249767  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.250123  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:20.250193  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:20.749739  146734 type.go:165] "Request Body" body=""
	I1222 22:52:20.749820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:20.750153  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.249730  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.250178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:21.749804  146734 type.go:165] "Request Body" body=""
	I1222 22:52:21.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:21.750232  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:22.249806  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.249886  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.250237  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:22.250298  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:22.750142  146734 type.go:165] "Request Body" body=""
	I1222 22:52:22.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:22.750516  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.250222  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.250332  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.250708  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:23.750438  146734 type.go:165] "Request Body" body=""
	I1222 22:52:23.750532  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:23.750972  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.249568  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.249660  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.249969  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:24.749566  146734 type.go:165] "Request Body" body=""
	I1222 22:52:24.749670  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:24.750007  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:24.750078  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:25.249560  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.249668  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.250009  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:25.749615  146734 type.go:165] "Request Body" body=""
	I1222 22:52:25.749711  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:25.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.249804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.250176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:26.749791  146734 type.go:165] "Request Body" body=""
	I1222 22:52:26.749896  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:26.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:26.750329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:27.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.249822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:27.749959  146734 type.go:165] "Request Body" body=""
	I1222 22:52:27.750049  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:27.750408  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.249981  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.250077  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.250414  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:28.749755  146734 type.go:165] "Request Body" body=""
	I1222 22:52:28.749827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:28.750148  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:29.249816  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.249914  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.250248  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:29.250329  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:29.749820  146734 type.go:165] "Request Body" body=""
	I1222 22:52:29.749892  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:29.750206  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.249734  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.250163  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:30.749684  146734 type.go:165] "Request Body" body=""
	I1222 22:52:30.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:30.750086  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.249714  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.249787  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.250100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:31.749725  146734 type.go:165] "Request Body" body=""
	I1222 22:52:31.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:31.750152  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:31.750223  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:32.249759  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.249872  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.250213  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:32.750284  146734 type.go:165] "Request Body" body=""
	I1222 22:52:32.750382  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:32.750808  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.250484  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.250553  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:33.750582  146734 type.go:165] "Request Body" body=""
	I1222 22:52:33.750682  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:33.751084  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:33.751151  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:34.249678  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.250118  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:34.749750  146734 type.go:165] "Request Body" body=""
	I1222 22:52:34.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:34.750178  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.249866  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.250195  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:35.749858  146734 type.go:165] "Request Body" body=""
	I1222 22:52:35.749938  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:35.750250  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:36.249945  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.250030  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.250372  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:36.250452  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:36.750049  146734 type.go:165] "Request Body" body=""
	I1222 22:52:36.750122  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:36.750536  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.250238  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.250338  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.250710  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.750607  146734 type.go:165] "Request Body" body=""
	I1222 22:52:37.750693  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:37.751037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:37.791261  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:52:37.841791  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:37.841848  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:37.841882  146734 retry.go:84] will retry after 24.9s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 22:52:38.250412  146734 type.go:165] "Request Body" body=""
	I1222 22:52:38.250501  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:38.250856  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:38.250927  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
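
	[annotation] The check node_ready.go keeps retrying amounts to reading the Node's Ready condition. A sketch of the same wait written against client-go (assumed available as a dependency; the kubeconfig path and node name are taken from this log):

	    // Poll until the Node's Ready condition is True or the timeout
	    // elapses, tolerating transient errors like the refused connections
	    // logged above.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
	    		10*time.Minute, true, func(ctx context.Context) (bool, error) {
	    			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-384766", metav1.GetOptions{})
	    			if err != nil {
	    				fmt.Println("will retry:", err) // matches the W... lines above
	    				return false, nil               // keep polling on transient errors
	    			}
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					return true, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    	if err != nil {
	    		fmt.Println("node never became Ready:", err)
	    	}
	    }
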
	I1222 22:52:38.750539  146734 type.go:165] "Request Body" body=""
	I1222 22:52:38.750640  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:38.750989  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:39.250534  146734 type.go:165] "Request Body" body=""
	I1222 22:52:39.250619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:39.250903  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:39.750622  146734 type.go:165] "Request Body" body=""
	I1222 22:52:39.750769  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:39.751121  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:40.249717  146734 type.go:165] "Request Body" body=""
	I1222 22:52:40.249817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:40.250172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:40.749725  146734 type.go:165] "Request Body" body=""
	I1222 22:52:40.749800  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:40.750112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:40.750183  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:41.249733  146734 type.go:165] "Request Body" body=""
	I1222 22:52:41.249816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:41.250159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:41.749758  146734 type.go:165] "Request Body" body=""
	I1222 22:52:41.749830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:41.750172  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:42.249740  146734 type.go:165] "Request Body" body=""
	I1222 22:52:42.249820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:42.250217  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:42.750262  146734 type.go:165] "Request Body" body=""
	I1222 22:52:42.750373  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:42.750720  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:42.750785  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:52:43.250407  146734 type.go:165] "Request Body" body=""
	I1222 22:52:43.250502  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:43.250877  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:52:43.750507  146734 type.go:165] "Request Body" body=""
	I1222 22:52:43.750607  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:52:43.750955  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:52:45.250109  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET /api/v1/nodes/functional-384766 request/response cycle shown above repeats unchanged every ~500ms through 22:52:53.750, each response logged with empty status, and the node_ready.go:55 "connection refused" warning recurs at ~2s intervals ...]
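The cadence above is a plain poll-until-ready loop: fetch the node object twice a second, treat "connection refused" as retryable, and give up only at a deadline. The following is a minimal Go sketch of that pattern, not minikube's actual node_ready.go; the URL is taken from the log, while the timeout, TLS handling, and success check are illustrative assumptions.

// waitready.go: a minimal sketch of the polling loop visible in the log above.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		// The test cluster serves a self-signed cert; skipping verification
		// keeps the sketch self-contained. Real code would load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	tick := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("node never became Ready: %w", ctx.Err())
		case <-tick.C:
			req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil) // static URL, error impossible
			// Same content negotiation the log shows: prefer protobuf, accept JSON.
			req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
			resp, err := client.Do(req)
			if err != nil {
				continue // connection refused: apiserver not up yet, keep retrying
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // node fetched; real code would inspect status.conditions for Ready
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	fmt.Println("result:", waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-384766"))
}

Every iteration in this run hits the err != nil branch, which is why the log shows only empty responses and periodic "will retry" warnings.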
	I1222 22:52:53.825965  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 22:52:53.875648  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878317  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:52:53.878441  146734 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
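The storage-provisioner failure above is the same apiserver outage surfacing through kubectl: client-side validation tries to download the OpenAPI schema from localhost:8441 and is refused, so the apply never reaches the server. The hinted --validate=false would only skip the schema download, not make the apply succeed. Below is a hypothetical Go sketch of the retry-on-apply pattern behind the addons.go:477 "apply failed, will retry" line; the command shape and paths are copied from the log, while the retry budget and backoff are assumptions, not minikube's real values.

// applyretry.go: hypothetical sketch of retrying a kubectl apply while the apiserver is down.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// Mirrors the logged command: sudo KUBECONFIG=... kubectl apply --force -f ...
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%w: %s", err, out)
		if strings.Contains(string(out), "connection refused") {
			time.Sleep(5 * time.Second) // apiserver down: wait and retry (interval is an assumption)
			continue
		}
		break // a genuine validation error will not heal on retry
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		10)
	fmt.Println("apply:", err)
}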
	[... polling continues unchanged every ~500ms from 22:52:54 through 22:53:02, every request refused, with node_ready.go:55 warnings at ~2s intervals ...]
	W1222 22:53:02.750554  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:02.761644  146734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 22:53:02.811523  146734 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814145  146734 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 22:53:02.814242  146734 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 22:53:02.815929  146734 out.go:179] * Enabled addons: 
	I1222 22:53:02.817068  146734 addons.go:530] duration metric: took 1m40.193946362s for enable addons: enabled=[]
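With both addon applies refused, the enable loop gives up after 1m40s and reports an empty set ("Enabled addons:" with enabled=[]). A minimal sketch of that pattern, assuming nothing about minikube's real addons.go: run one enable callback per addon, keep only the names that succeeded, and report the set plus the elapsed duration. Addon names are from the log; everything else is illustrative.

// addonset.go: sketch of the run-callbacks-and-report pattern the two lines above describe.
package main

import (
	"fmt"
	"time"
)

type addon struct {
	name   string
	enable func() error
}

func enableAddons(addons []addon) []string {
	start := time.Now()
	enabled := []string{}
	for _, a := range addons {
		if err := a.enable(); err != nil {
			fmt.Printf("! Enabling %q returned an error: %v\n", a.name, err)
			continue // failed addons are logged but excluded from the enabled set
		}
		enabled = append(enabled, a.name)
	}
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
	return enabled
}

func main() {
	enableAddons([]addon{
		{"storage-provisioner", func() error { return fmt.Errorf("connection refused") }},
		{"default-storageclass", func() error { return fmt.Errorf("connection refused") }},
	})
}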
	[... the same request/response cycle repeats every ~500ms from 22:53:03 through 22:53:41, each response logged with empty status, with node_ready.go:55 "connection refused" warnings at ~2s intervals ...]
	W1222 22:53:41.250233  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:42.249731  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.249824  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.250150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:42.750207  146734 type.go:165] "Request Body" body=""
	I1222 22:53:42.750296  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:42.750623  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:43.250305  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.250405  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.250821  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:43.250898  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:43.750513  146734 type.go:165] "Request Body" body=""
	I1222 22:53:43.750619  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:43.750993  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.249564  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.249700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.250049  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:44.749717  146734 type.go:165] "Request Body" body=""
	I1222 22:53:44.749793  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:44.750110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.249673  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.249765  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:45.749727  146734 type.go:165] "Request Body" body=""
	I1222 22:53:45.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:45.750168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:45.750236  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:46.249739  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.249823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.250174  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:46.749761  146734 type.go:165] "Request Body" body=""
	I1222 22:53:46.749836  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:46.750164  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.249821  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:47.749948  146734 type.go:165] "Request Body" body=""
	I1222 22:53:47.750043  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:47.750397  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:47.750471  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:48.249836  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.249915  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.250168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:48.749809  146734 type.go:165] "Request Body" body=""
	I1222 22:53:48.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:48.750240  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.249818  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.249899  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.250266  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:49.749713  146734 type.go:165] "Request Body" body=""
	I1222 22:53:49.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:49.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:50.249693  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.249771  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.250117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:50.250187  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:50.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:53:50.749812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:50.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:51.249780  146734 type.go:165] "Request Body" body=""
	I1222 22:53:51.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:51.250247  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:51.749718  146734 type.go:165] "Request Body" body=""
	I1222 22:53:51.749796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:51.750134  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:52.249742  146734 type.go:165] "Request Body" body=""
	I1222 22:53:52.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:52.250156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:52.250222  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:52.750193  146734 type.go:165] "Request Body" body=""
	I1222 22:53:52.750262  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:52.750631  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:53.250282  146734 type.go:165] "Request Body" body=""
	I1222 22:53:53.250365  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:53.250710  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:53.750364  146734 type.go:165] "Request Body" body=""
	I1222 22:53:53.750466  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:53.750806  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:54.250614  146734 type.go:165] "Request Body" body=""
	I1222 22:53:54.250720  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:54.251091  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:54.251160  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:54.749719  146734 type.go:165] "Request Body" body=""
	I1222 22:53:54.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:54.750115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:55.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:53:55.249798  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:55.250115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:55.749721  146734 type.go:165] "Request Body" body=""
	I1222 22:53:55.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:55.750120  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:56.249788  146734 type.go:165] "Request Body" body=""
	I1222 22:53:56.249864  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:56.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:56.749823  146734 type.go:165] "Request Body" body=""
	I1222 22:53:56.749924  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:56.750260  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:56.750323  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:57.249824  146734 type.go:165] "Request Body" body=""
	I1222 22:53:57.249911  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:57.250258  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:57.750028  146734 type.go:165] "Request Body" body=""
	I1222 22:53:57.750101  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:57.750434  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:58.250067  146734 type.go:165] "Request Body" body=""
	I1222 22:53:58.250165  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:58.250680  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:58.750319  146734 type.go:165] "Request Body" body=""
	I1222 22:53:58.750397  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:58.750751  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:53:58.750816  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:53:59.250396  146734 type.go:165] "Request Body" body=""
	I1222 22:53:59.250469  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:59.250838  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:53:59.750501  146734 type.go:165] "Request Body" body=""
	I1222 22:53:59.750579  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:53:59.750961  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:00.250615  146734 type.go:165] "Request Body" body=""
	I1222 22:54:00.250698  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:00.251022  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:00.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:54:00.749712  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:00.750067  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:01.249664  146734 type.go:165] "Request Body" body=""
	I1222 22:54:01.249759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:01.250103  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:01.250170  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:01.749661  146734 type.go:165] "Request Body" body=""
	I1222 22:54:01.749759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:01.750063  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:02.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:02.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:02.250139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:02.750137  146734 type.go:165] "Request Body" body=""
	I1222 22:54:02.750239  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:02.750626  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:03.249783  146734 type.go:165] "Request Body" body=""
	I1222 22:54:03.249870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:03.250198  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:03.250261  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:03.749859  146734 type.go:165] "Request Body" body=""
	I1222 22:54:03.749942  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:03.750266  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:04.249837  146734 type.go:165] "Request Body" body=""
	I1222 22:54:04.249924  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:04.250252  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:04.749814  146734 type.go:165] "Request Body" body=""
	I1222 22:54:04.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:04.750218  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:05.249838  146734 type.go:165] "Request Body" body=""
	I1222 22:54:05.249920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:05.250251  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:05.250311  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:05.749869  146734 type.go:165] "Request Body" body=""
	I1222 22:54:05.749946  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:05.750283  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:06.249832  146734 type.go:165] "Request Body" body=""
	I1222 22:54:06.249906  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:06.250221  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:06.749788  146734 type.go:165] "Request Body" body=""
	I1222 22:54:06.749871  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:06.750209  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:07.249776  146734 type.go:165] "Request Body" body=""
	I1222 22:54:07.249854  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:07.250183  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:07.749835  146734 type.go:165] "Request Body" body=""
	I1222 22:54:07.749920  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:07.750202  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:07.750265  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:08.249824  146734 type.go:165] "Request Body" body=""
	I1222 22:54:08.249910  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:08.250234  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:08.749844  146734 type.go:165] "Request Body" body=""
	I1222 22:54:08.749966  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:08.750291  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:09.249871  146734 type.go:165] "Request Body" body=""
	I1222 22:54:09.249975  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:09.250275  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:09.749927  146734 type.go:165] "Request Body" body=""
	I1222 22:54:09.750002  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:09.750347  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:09.750418  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:10.249886  146734 type.go:165] "Request Body" body=""
	I1222 22:54:10.249966  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:10.250298  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:10.749734  146734 type.go:165] "Request Body" body=""
	I1222 22:54:10.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:10.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:11.249744  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.249820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.250144  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:11.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:54:11.749862  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:11.750201  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:12.249794  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.249873  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.250162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:12.250235  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:12.750120  146734 type.go:165] "Request Body" body=""
	I1222 22:54:12.750215  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:12.750539  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.250225  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.250297  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.250634  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:13.750359  146734 type.go:165] "Request Body" body=""
	I1222 22:54:13.750457  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:13.750827  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:14.250483  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.250556  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.250898  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:14.250968  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:14.750562  146734 type.go:165] "Request Body" body=""
	I1222 22:54:14.750659  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:14.750987  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.250672  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.250773  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.251113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:15.749701  146734 type.go:165] "Request Body" body=""
	I1222 22:54:15.749784  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:15.750120  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.249715  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.249790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.250112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:16.749747  146734 type.go:165] "Request Body" body=""
	I1222 22:54:16.749839  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:16.750218  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:16.750293  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:17.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.249860  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.250177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:17.750000  146734 type.go:165] "Request Body" body=""
	I1222 22:54:17.750088  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:17.750446  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.249735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.249825  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.250179  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:18.749732  146734 type.go:165] "Request Body" body=""
	I1222 22:54:18.749816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:18.750154  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:19.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.250196  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:19.250264  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:19.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:54:19.749819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:19.750126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.249785  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.250091  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:20.749724  146734 type.go:165] "Request Body" body=""
	I1222 22:54:20.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:20.750150  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.249797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.250083  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:21.749694  146734 type.go:165] "Request Body" body=""
	I1222 22:54:21.749796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:21.750142  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:21.750217  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:22.249754  146734 type.go:165] "Request Body" body=""
	I1222 22:54:22.249830  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:22.250160  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:22.750118  146734 type.go:165] "Request Body" body=""
	I1222 22:54:22.750196  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:22.750523  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:23.250322  146734 type.go:165] "Request Body" body=""
	I1222 22:54:23.250400  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:23.250767  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:23.750404  146734 type.go:165] "Request Body" body=""
	I1222 22:54:23.750488  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:23.750857  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:23.750933  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:24.249571  146734 type.go:165] "Request Body" body=""
	I1222 22:54:24.249681  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:24.250051  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:24.749643  146734 type.go:165] "Request Body" body=""
	I1222 22:54:24.749721  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:24.750066  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:25.249682  146734 type.go:165] "Request Body" body=""
	I1222 22:54:25.249768  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:25.250100  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:25.749662  146734 type.go:165] "Request Body" body=""
	I1222 22:54:25.749739  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:25.750065  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:26.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:54:26.249806  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:26.250109  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:26.250210  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:26.749706  146734 type.go:165] "Request Body" body=""
	I1222 22:54:26.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:26.750145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:27.249715  146734 type.go:165] "Request Body" body=""
	I1222 22:54:27.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:27.250030  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:27.749914  146734 type.go:165] "Request Body" body=""
	I1222 22:54:27.749988  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:27.750304  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:28.249902  146734 type.go:165] "Request Body" body=""
	I1222 22:54:28.249990  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:28.250337  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:28.250411  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:28.749874  146734 type.go:165] "Request Body" body=""
	I1222 22:54:28.749948  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:28.750244  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:29.249918  146734 type.go:165] "Request Body" body=""
	I1222 22:54:29.249996  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:29.250346  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:29.749881  146734 type.go:165] "Request Body" body=""
	I1222 22:54:29.749960  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:29.750287  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:30.249721  146734 type.go:165] "Request Body" body=""
	I1222 22:54:30.249800  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:30.250101  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:30.749693  146734 type.go:165] "Request Body" body=""
	I1222 22:54:30.749779  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:30.750112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:30.750186  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:31.249759  146734 type.go:165] "Request Body" body=""
	I1222 22:54:31.249844  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:31.250182  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:31.749729  146734 type.go:165] "Request Body" body=""
	I1222 22:54:31.749814  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:31.750130  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:32.249782  146734 type.go:165] "Request Body" body=""
	I1222 22:54:32.249862  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:32.250186  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:32.750121  146734 type.go:165] "Request Body" body=""
	I1222 22:54:32.750207  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:32.750542  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:32.750634  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:33.249784  146734 type.go:165] "Request Body" body=""
	I1222 22:54:33.249855  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:33.250171  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:33.749810  146734 type.go:165] "Request Body" body=""
	I1222 22:54:33.749888  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:33.750215  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:34.249774  146734 type.go:165] "Request Body" body=""
	I1222 22:54:34.249851  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:34.250176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:34.749708  146734 type.go:165] "Request Body" body=""
	I1222 22:54:34.749777  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:34.750090  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:35.249688  146734 type.go:165] "Request Body" body=""
	I1222 22:54:35.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:35.250089  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:35.250148  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:35.749687  146734 type.go:165] "Request Body" body=""
	I1222 22:54:35.749759  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:35.750079  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:36.249639  146734 type.go:165] "Request Body" body=""
	I1222 22:54:36.249710  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:36.250032  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:36.749672  146734 type.go:165] "Request Body" body=""
	I1222 22:54:36.749746  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:36.750070  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:37.249695  146734 type.go:165] "Request Body" body=""
	I1222 22:54:37.249776  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:37.250115  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:37.250177  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:37.750002  146734 type.go:165] "Request Body" body=""
	I1222 22:54:37.750095  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:37.750456  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:38.250050  146734 type.go:165] "Request Body" body=""
	I1222 22:54:38.250125  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:38.250452  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:38.749992  146734 type.go:165] "Request Body" body=""
	I1222 22:54:38.750073  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:38.750425  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:39.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:39.249774  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:39.250018  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:39.749673  146734 type.go:165] "Request Body" body=""
	I1222 22:54:39.749773  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:39.750117  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:39.750178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:40.249716  146734 type.go:165] "Request Body" body=""
	I1222 22:54:40.249789  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:40.250105  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:40.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:54:40.749797  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:40.750133  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:41.249799  146734 type.go:165] "Request Body" body=""
	I1222 22:54:41.249873  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:41.250194  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:41.749789  146734 type.go:165] "Request Body" body=""
	I1222 22:54:41.749883  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:41.750225  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:41.750287  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:42.249727  146734 type.go:165] "Request Body" body=""
	I1222 22:54:42.249800  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:42.250096  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:42.750174  146734 type.go:165] "Request Body" body=""
	I1222 22:54:42.750257  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:42.750651  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:43.250236  146734 type.go:165] "Request Body" body=""
	I1222 22:54:43.250313  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:43.250694  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:43.750289  146734 type.go:165] "Request Body" body=""
	I1222 22:54:43.750355  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:43.750686  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:43.750759  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:44.250285  146734 type.go:165] "Request Body" body=""
	I1222 22:54:44.250357  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:44.250709  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:44.750302  146734 type.go:165] "Request Body" body=""
	I1222 22:54:44.750384  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:44.750746  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:45.250350  146734 type.go:165] "Request Body" body=""
	I1222 22:54:45.250430  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:45.250764  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:45.750440  146734 type.go:165] "Request Body" body=""
	I1222 22:54:45.750515  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:45.750874  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:45.750949  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:46.250491  146734 type.go:165] "Request Body" body=""
	I1222 22:54:46.250567  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:46.250913  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:46.750576  146734 type.go:165] "Request Body" body=""
	I1222 22:54:46.750673  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:46.751016  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:47.249570  146734 type.go:165] "Request Body" body=""
	I1222 22:54:47.249660  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:47.249996  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:47.749731  146734 type.go:165] "Request Body" body=""
	I1222 22:54:47.749824  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:47.750169  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:48.249741  146734 type.go:165] "Request Body" body=""
	I1222 22:54:48.249822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:48.250159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:48.250226  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:48.749795  146734 type.go:165] "Request Body" body=""
	I1222 22:54:48.749894  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:48.750223  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:49.249848  146734 type.go:165] "Request Body" body=""
	I1222 22:54:49.249934  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:49.250267  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:49.749720  146734 type.go:165] "Request Body" body=""
	I1222 22:54:49.749804  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:49.750126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:50.249670  146734 type.go:165] "Request Body" body=""
	I1222 22:54:50.249750  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:50.250079  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:50.749692  146734 type.go:165] "Request Body" body=""
	I1222 22:54:50.749766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:50.750099  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:50.750164  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:51.249654  146734 type.go:165] "Request Body" body=""
	I1222 22:54:51.249734  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:51.250056  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:51.749698  146734 type.go:165] "Request Body" body=""
	I1222 22:54:51.749776  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:51.750104  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:52.249708  146734 type.go:165] "Request Body" body=""
	I1222 22:54:52.249803  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:52.250143  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:52.750168  146734 type.go:165] "Request Body" body=""
	I1222 22:54:52.750240  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:52.750636  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:52.750725  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:53.250283  146734 type.go:165] "Request Body" body=""
	I1222 22:54:53.250369  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:53.250696  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:53.750388  146734 type.go:165] "Request Body" body=""
	I1222 22:54:53.750469  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:53.750824  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:54.250460  146734 type.go:165] "Request Body" body=""
	I1222 22:54:54.250538  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:54.250871  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:54.750538  146734 type.go:165] "Request Body" body=""
	I1222 22:54:54.750645  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:54.750985  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:54.751057  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:55.249649  146734 type.go:165] "Request Body" body=""
	I1222 22:54:55.249732  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:55.250057  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:55.749707  146734 type.go:165] "Request Body" body=""
	I1222 22:54:55.749790  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:55.750108  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:56.249688  146734 type.go:165] "Request Body" body=""
	I1222 22:54:56.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:56.250095  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:56.749726  146734 type.go:165] "Request Body" body=""
	I1222 22:54:56.749805  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:56.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:57.249569  146734 type.go:165] "Request Body" body=""
	I1222 22:54:57.249701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:57.250071  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:57.250148  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:57.750019  146734 type.go:165] "Request Body" body=""
	I1222 22:54:57.750094  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:57.750442  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:58.249983  146734 type.go:165] "Request Body" body=""
	I1222 22:54:58.250056  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:58.250388  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:58.749735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:58.749813  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:58.750122  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:54:59.249693  146734 type.go:165] "Request Body" body=""
	I1222 22:54:59.249770  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:59.250112  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:54:59.250182  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:54:59.749735  146734 type.go:165] "Request Body" body=""
	I1222 22:54:59.749823  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:54:59.750156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:00.249747  146734 type.go:165] "Request Body" body=""
	I1222 22:55:00.249832  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:00.250162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:00.749730  146734 type.go:165] "Request Body" body=""
	I1222 22:55:00.749817  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:00.750139  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:01.249720  146734 type.go:165] "Request Body" body=""
	I1222 22:55:01.249796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:01.250170  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:01.250234  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:01.749747  146734 type.go:165] "Request Body" body=""
	I1222 22:55:01.749834  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:01.750177  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:02.249773  146734 type.go:165] "Request Body" body=""
	I1222 22:55:02.249853  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:02.250190  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:02.750185  146734 type.go:165] "Request Body" body=""
	I1222 22:55:02.750272  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:02.750679  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:03.250410  146734 type.go:165] "Request Body" body=""
	I1222 22:55:03.250484  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:03.250800  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:03.250864  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:03.750490  146734 type.go:165] "Request Body" body=""
	I1222 22:55:03.750574  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:03.750953  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:04.250588  146734 type.go:165] "Request Body" body=""
	I1222 22:55:04.250700  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:04.251072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:04.749564  146734 type.go:165] "Request Body" body=""
	I1222 22:55:04.749679  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:04.750072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:05.249662  146734 type.go:165] "Request Body" body=""
	I1222 22:55:05.249748  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:05.250095  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:05.749749  146734 type.go:165] "Request Body" body=""
	I1222 22:55:05.749828  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:05.750162  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:05.750227  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:06.249723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:06.249815  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:06.250126  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:06.749723  146734 type.go:165] "Request Body" body=""
	I1222 22:55:06.749809  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:06.750146  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:07.249732  146734 type.go:165] "Request Body" body=""
	I1222 22:55:07.249819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:07.250155  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:07.749808  146734 type.go:165] "Request Body" body=""
	I1222 22:55:07.749885  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:07.750206  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:07.750279  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:08.249799  146734 type.go:165] "Request Body" body=""
	I1222 22:55:08.249870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:08.250200  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:08.749784  146734 type.go:165] "Request Body" body=""
	I1222 22:55:08.749858  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:08.750157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:09.249830  146734 type.go:165] "Request Body" body=""
	I1222 22:55:09.249933  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:09.250259  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:09.749895  146734 type.go:165] "Request Body" body=""
	I1222 22:55:09.749973  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:09.750307  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:09.750375  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:10.249899  146734 type.go:165] "Request Body" body=""
	I1222 22:55:10.249974  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:10.250316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:10.749711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:10.749786  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:10.750166  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:11.249748  146734 type.go:165] "Request Body" body=""
	I1222 22:55:11.249819  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:11.250148  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:11.749856  146734 type.go:165] "Request Body" body=""
	I1222 22:55:11.749994  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:11.750335  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:12.249957  146734 type.go:165] "Request Body" body=""
	I1222 22:55:12.250025  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:12.250328  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:12.250386  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:12.750340  146734 type.go:165] "Request Body" body=""
	I1222 22:55:12.750433  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:12.750899  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:13.250509  146734 type.go:165] "Request Body" body=""
	I1222 22:55:13.250620  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:13.250953  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:13.750574  146734 type.go:165] "Request Body" body=""
	I1222 22:55:13.750664  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:13.750913  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:14.249616  146734 type.go:165] "Request Body" body=""
	I1222 22:55:14.249702  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:14.250052  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:14.749677  146734 type.go:165] "Request Body" body=""
	I1222 22:55:14.749762  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:14.750119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:14.750178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:15.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:55:15.249782  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:15.250113  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:15.749689  146734 type.go:165] "Request Body" body=""
	I1222 22:55:15.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:15.750110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:16.249711  146734 type.go:165] "Request Body" body=""
	I1222 22:55:16.249796  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:16.250119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:16.749704  146734 type.go:165] "Request Body" body=""
	I1222 22:55:16.749779  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:16.750098  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:17.249718  146734 type.go:165] "Request Body" body=""
	I1222 22:55:17.249799  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:17.250129  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:17.250204  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:17.749909  146734 type.go:165] "Request Body" body=""
	I1222 22:55:17.750010  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:17.750386  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:18.250005  146734 type.go:165] "Request Body" body=""
	I1222 22:55:18.250097  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:18.250519  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:18.750237  146734 type.go:165] "Request Body" body=""
	I1222 22:55:18.750318  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:18.750671  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:19.250360  146734 type.go:165] "Request Body" body=""
	I1222 22:55:19.250435  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:19.250782  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:19.250850  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:19.750393  146734 type.go:165] "Request Body" body=""
	I1222 22:55:19.750473  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:19.750812  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:20.250244  146734 type.go:165] "Request Body" body=""
	I1222 22:55:20.250340  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:20.250766  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:20.749617  146734 type.go:165] "Request Body" body=""
	I1222 22:55:20.749731  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:20.750232  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:21.249738  146734 type.go:165] "Request Body" body=""
	I1222 22:55:21.249818  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:21.250145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:21.749880  146734 type.go:165] "Request Body" body=""
	I1222 22:55:21.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:21.750345  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:21.750422  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:22.249651  146734 type.go:165] "Request Body" body=""
	I1222 22:55:22.249745  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:22.250098  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:22.750064  146734 type.go:165] "Request Body" body=""
	I1222 22:55:22.750133  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:22.750512  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:23.250053  146734 type.go:165] "Request Body" body=""
	I1222 22:55:23.250126  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:23.250447  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:23.750003  146734 type.go:165] "Request Body" body=""
	I1222 22:55:23.750079  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:23.750487  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:23.750580  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:55:24.249755  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.249827  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.250165  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:24.749758  146734 type.go:165] "Request Body" body=""
	I1222 22:55:24.749828  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:24.750192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.249696  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.249769  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.250072  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:25.749697  146734 type.go:165] "Request Body" body=""
	I1222 22:55:25.749783  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:25.750159  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:55:26.249886  146734 type.go:165] "Request Body" body=""
	I1222 22:55:26.249958  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:55:26.250275  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:55:26.250336  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-384766 polled every ~500 ms from 22:55:26.750 through 22:56:28.750 with identical request headers (Accept: application/vnd.kubernetes.protobuf,application/json; User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format); every response was empty (status="", headers="", 0 ms) and node_ready.go:55 logged the same "dial tcp 192.168.49.2:8441: connect: connection refused" (will retry) warning every ~2.5 s through 22:56:27.750 ...]
	I1222 22:56:29.250176  146734 type.go:165] "Request Body" body=""
	I1222 22:56:29.250253  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:29.250628  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:29.750430  146734 type.go:165] "Request Body" body=""
	I1222 22:56:29.750516  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:29.750880  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:29.750941  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:30.249655  146734 type.go:165] "Request Body" body=""
	I1222 22:56:30.249745  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:30.250057  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:30.749762  146734 type.go:165] "Request Body" body=""
	I1222 22:56:30.749841  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:30.750176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:31.249900  146734 type.go:165] "Request Body" body=""
	I1222 22:56:31.249991  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:31.250332  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:31.749728  146734 type.go:165] "Request Body" body=""
	I1222 22:56:31.749800  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:31.750161  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:32.249910  146734 type.go:165] "Request Body" body=""
	I1222 22:56:32.249987  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:32.250372  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:32.250439  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:32.750202  146734 type.go:165] "Request Body" body=""
	I1222 22:56:32.750290  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:32.750667  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:33.250449  146734 type.go:165] "Request Body" body=""
	I1222 22:56:33.250524  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:33.250855  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:33.749570  146734 type.go:165] "Request Body" body=""
	I1222 22:56:33.749674  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:33.750002  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:34.249728  146734 type.go:165] "Request Body" body=""
	I1222 22:56:34.249812  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:34.250144  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:34.749746  146734 type.go:165] "Request Body" body=""
	I1222 22:56:34.749820  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:34.750119  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:34.750176  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:35.249857  146734 type.go:165] "Request Body" body=""
	I1222 22:56:35.249936  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:35.250259  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:35.750048  146734 type.go:165] "Request Body" body=""
	I1222 22:56:35.750143  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:35.750524  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:36.250349  146734 type.go:165] "Request Body" body=""
	I1222 22:56:36.250426  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:36.250769  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:36.750203  146734 type.go:165] "Request Body" body=""
	I1222 22:56:36.750279  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:36.750663  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:36.750725  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:37.250497  146734 type.go:165] "Request Body" body=""
	I1222 22:56:37.250572  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:37.251039  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:37.749807  146734 type.go:165] "Request Body" body=""
	I1222 22:56:37.749884  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:37.750203  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:38.249961  146734 type.go:165] "Request Body" body=""
	I1222 22:56:38.250044  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:38.250402  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:38.750242  146734 type.go:165] "Request Body" body=""
	I1222 22:56:38.750337  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:38.750714  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:38.750783  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:39.249573  146734 type.go:165] "Request Body" body=""
	I1222 22:56:39.249685  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:39.250006  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:39.749755  146734 type.go:165] "Request Body" body=""
	I1222 22:56:39.749837  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:39.750176  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:40.249985  146734 type.go:165] "Request Body" body=""
	I1222 22:56:40.250068  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:40.250462  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:40.749791  146734 type.go:165] "Request Body" body=""
	I1222 22:56:40.749864  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:40.750168  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:41.249972  146734 type.go:165] "Request Body" body=""
	I1222 22:56:41.250067  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:41.250448  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:41.250511  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:41.750285  146734 type.go:165] "Request Body" body=""
	I1222 22:56:41.750360  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:41.750722  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:42.250530  146734 type.go:165] "Request Body" body=""
	I1222 22:56:42.250634  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:42.251005  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:42.750176  146734 type.go:165] "Request Body" body=""
	I1222 22:56:42.750262  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:42.750629  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:43.250421  146734 type.go:165] "Request Body" body=""
	I1222 22:56:43.250500  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:43.250876  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:43.250943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:43.749662  146734 type.go:165] "Request Body" body=""
	I1222 22:56:43.749749  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:43.750068  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:44.249733  146734 type.go:165] "Request Body" body=""
	I1222 22:56:44.249816  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:44.250156  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:44.749892  146734 type.go:165] "Request Body" body=""
	I1222 22:56:44.749976  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:44.750375  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:45.249835  146734 type.go:165] "Request Body" body=""
	I1222 22:56:45.249925  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:45.250244  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:45.750013  146734 type.go:165] "Request Body" body=""
	I1222 22:56:45.750091  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:45.750446  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:45.750514  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:46.250285  146734 type.go:165] "Request Body" body=""
	I1222 22:56:46.250360  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:46.250755  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:46.749559  146734 type.go:165] "Request Body" body=""
	I1222 22:56:46.749670  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:46.750010  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:47.249802  146734 type.go:165] "Request Body" body=""
	I1222 22:56:47.249878  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:47.250237  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:47.749902  146734 type.go:165] "Request Body" body=""
	I1222 22:56:47.750019  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:47.750394  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:48.250358  146734 type.go:165] "Request Body" body=""
	I1222 22:56:48.250462  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:48.250882  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:48.250970  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:48.749733  146734 type.go:165] "Request Body" body=""
	I1222 22:56:48.749822  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:48.750138  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:49.249857  146734 type.go:165] "Request Body" body=""
	I1222 22:56:49.249934  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:49.250272  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:49.750058  146734 type.go:165] "Request Body" body=""
	I1222 22:56:49.750161  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:49.750546  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:50.250534  146734 type.go:165] "Request Body" body=""
	I1222 22:56:50.250637  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:50.250970  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:50.251043  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:50.749768  146734 type.go:165] "Request Body" body=""
	I1222 22:56:50.749857  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:50.750193  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:51.250007  146734 type.go:165] "Request Body" body=""
	I1222 22:56:51.250097  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:51.250480  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:51.750285  146734 type.go:165] "Request Body" body=""
	I1222 22:56:51.750367  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:51.750694  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:52.250507  146734 type.go:165] "Request Body" body=""
	I1222 22:56:52.250580  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:52.250924  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:52.749846  146734 type.go:165] "Request Body" body=""
	I1222 22:56:52.749917  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:52.750231  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:52.750289  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:53.250037  146734 type.go:165] "Request Body" body=""
	I1222 22:56:53.250116  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:53.250450  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:53.750301  146734 type.go:165] "Request Body" body=""
	I1222 22:56:53.750379  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:53.750711  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:54.250570  146734 type.go:165] "Request Body" body=""
	I1222 22:56:54.250683  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:54.251037  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:54.749778  146734 type.go:165] "Request Body" body=""
	I1222 22:56:54.749870  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:54.750212  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:55.249925  146734 type.go:165] "Request Body" body=""
	I1222 22:56:55.250005  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:55.250325  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:55.250383  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:55.750056  146734 type.go:165] "Request Body" body=""
	I1222 22:56:55.750129  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:55.750431  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:56.250241  146734 type.go:165] "Request Body" body=""
	I1222 22:56:56.250315  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:56.250691  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:56.750506  146734 type.go:165] "Request Body" body=""
	I1222 22:56:56.750582  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:56.750942  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:57.249670  146734 type.go:165] "Request Body" body=""
	I1222 22:56:57.249740  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:57.250040  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:57.749841  146734 type.go:165] "Request Body" body=""
	I1222 22:56:57.749915  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:57.750251  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:57.750326  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:56:58.249993  146734 type.go:165] "Request Body" body=""
	I1222 22:56:58.250069  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:58.250401  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:58.749827  146734 type.go:165] "Request Body" body=""
	I1222 22:56:58.749905  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:58.750192  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:59.249907  146734 type.go:165] "Request Body" body=""
	I1222 22:56:59.249979  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:59.250306  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:56:59.750056  146734 type.go:165] "Request Body" body=""
	I1222 22:56:59.750136  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:56:59.750491  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:56:59.750565  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:00.250323  146734 type.go:165] "Request Body" body=""
	I1222 22:57:00.250408  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:00.250755  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:00.749631  146734 type.go:165] "Request Body" body=""
	I1222 22:57:00.749707  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:00.750021  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:01.249677  146734 type.go:165] "Request Body" body=""
	I1222 22:57:01.249762  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:01.250096  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:01.749829  146734 type.go:165] "Request Body" body=""
	I1222 22:57:01.749919  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:01.750234  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:02.249941  146734 type.go:165] "Request Body" body=""
	I1222 22:57:02.250014  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:02.250348  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:02.250421  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:02.750212  146734 type.go:165] "Request Body" body=""
	I1222 22:57:02.750300  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:02.750654  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:03.250450  146734 type.go:165] "Request Body" body=""
	I1222 22:57:03.250517  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:03.250851  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:03.749576  146734 type.go:165] "Request Body" body=""
	I1222 22:57:03.749665  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:03.749988  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:04.249767  146734 type.go:165] "Request Body" body=""
	I1222 22:57:04.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:04.250173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:04.750033  146734 type.go:165] "Request Body" body=""
	I1222 22:57:04.750117  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:04.750502  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:04.750569  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:05.250312  146734 type.go:165] "Request Body" body=""
	I1222 22:57:05.250398  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:05.250762  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:05.750575  146734 type.go:165] "Request Body" body=""
	I1222 22:57:05.750666  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:05.751012  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:06.249706  146734 type.go:165] "Request Body" body=""
	I1222 22:57:06.249781  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:06.250093  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:06.749823  146734 type.go:165] "Request Body" body=""
	I1222 22:57:06.749898  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:06.750282  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:07.250051  146734 type.go:165] "Request Body" body=""
	I1222 22:57:07.250124  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:07.250473  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:07.250533  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:07.750214  146734 type.go:165] "Request Body" body=""
	I1222 22:57:07.750298  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:07.750580  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:08.250395  146734 type.go:165] "Request Body" body=""
	I1222 22:57:08.250486  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:08.250871  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:08.749688  146734 type.go:165] "Request Body" body=""
	I1222 22:57:08.749763  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:08.750089  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:09.249707  146734 type.go:165] "Request Body" body=""
	I1222 22:57:09.249788  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:09.250145  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:09.749900  146734 type.go:165] "Request Body" body=""
	I1222 22:57:09.749983  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:09.750354  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:09.750431  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:10.250176  146734 type.go:165] "Request Body" body=""
	I1222 22:57:10.250252  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:10.250587  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:10.750397  146734 type.go:165] "Request Body" body=""
	I1222 22:57:10.750472  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:10.750832  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:11.249645  146734 type.go:165] "Request Body" body=""
	I1222 22:57:11.249739  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:11.250107  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:11.749864  146734 type.go:165] "Request Body" body=""
	I1222 22:57:11.749962  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:11.750316  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:12.249663  146734 type.go:165] "Request Body" body=""
	I1222 22:57:12.249748  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:12.250105  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:12.250178  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:12.750096  146734 type.go:165] "Request Body" body=""
	I1222 22:57:12.750174  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:12.750521  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:13.250403  146734 type.go:165] "Request Body" body=""
	I1222 22:57:13.250481  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:13.250854  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:13.749624  146734 type.go:165] "Request Body" body=""
	I1222 22:57:13.749717  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:13.750062  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:14.249768  146734 type.go:165] "Request Body" body=""
	I1222 22:57:14.249842  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:14.250173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:14.250237  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:14.749931  146734 type.go:165] "Request Body" body=""
	I1222 22:57:14.750016  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:14.750331  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.250077  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.250149  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.250459  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:15.750281  146734 type.go:165] "Request Body" body=""
	I1222 22:57:15.750366  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:15.750687  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:16.250490  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.250582  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.250949  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:16.251012  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:16.749742  146734 type.go:165] "Request Body" body=""
	I1222 22:57:16.749829  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:16.750173  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.249915  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.250011  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.250323  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:17.750089  146734 type.go:165] "Request Body" body=""
	I1222 22:57:17.750167  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:17.750505  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.250322  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.250402  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.250802  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:18.749588  146734 type.go:165] "Request Body" body=""
	I1222 22:57:18.749719  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:18.750078  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:18.750142  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:19.249857  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.249933  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.250256  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:19.749770  146734 type.go:165] "Request Body" body=""
	I1222 22:57:19.749854  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:19.750208  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.249748  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.249831  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.250157  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:20.750100  146734 type.go:165] "Request Body" body=""
	I1222 22:57:20.750194  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:20.750870  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1222 22:57:20.750943  146734 node_ready.go:55] error getting node "functional-384766" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-384766": dial tcp 192.168.49.2:8441: connect: connection refused
	I1222 22:57:21.249638  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.249718  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.250054  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:21.749626  146734 type.go:165] "Request Body" body=""
	I1222 22:57:21.749701  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:21.750025  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.249683  146734 type.go:165] "Request Body" body=""
	I1222 22:57:22.249766  146734 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-384766" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1222 22:57:22.250110  146734 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1222 22:57:22.750117  146734 node_ready.go:38] duration metric: took 6m0.000675026s for node "functional-384766" to be "Ready" ...
	I1222 22:57:22.752685  146734 out.go:203] 
	W1222 22:57:22.753745  146734 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1222 22:57:22.753760  146734 out.go:285] * 
	W1222 22:57:22.753991  146734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 22:57:22.755053  146734 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.767074805Z" level=info msg="Loading containers: done."
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776636662Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776667996Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.776702670Z" level=info msg="Initializing buildkit"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.795232210Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799403213Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799466264Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799532589Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:51:20 functional-384766 dockerd[9922]: time="2025-12-22T22:51:20.799493974Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:51:20 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:20 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:51:20 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:51:21 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:51:21 functional-384766 cri-dockerd[10237]: time="2025-12-22T22:51:21Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:51:21 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:57:35.512796   17503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:35.513364   17503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:35.515101   17503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:35.515643   17503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:57:35.517311   17503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 22:57:35 up  2:39,  0 user,  load average: 0.77, 0.34, 0.57
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 22:57:32 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:33 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 22 22:57:33 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:33 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:33 functional-384766 kubelet[17225]: E1222 22:57:33.286472   17225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:33 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:33 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:33 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 22 22:57:33 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:33 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:34 functional-384766 kubelet[17352]: E1222 22:57:34.054675   17352 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:34 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:34 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:34 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 22 22:57:34 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:34 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:34 functional-384766 kubelet[17390]: E1222 22:57:34.756859   17390 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:34 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:34 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 22:57:35 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 22 22:57:35 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:35 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 22:57:35 functional-384766 kubelet[17512]: E1222 22:57:35.538242   17512 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 22:57:35 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 22:57:35 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
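
The six-minute stretch of GET requests in the trace above is minikube polling /api/v1/nodes/functional-384766 for the node's Ready condition roughly every 500ms until its 6m0s deadline expires. A minimal way to run the same check by hand, assuming kubectl on the host and the kubeconfig/context minikube writes for this profile (a sketch, not part of the test run):

	# Query the Ready condition the wait loop above polls for (sketch).
	kubectl --context functional-384766 get node functional-384766 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# With the apiserver down this fails with the same "connection refused"
	# seen in the dial tcp 192.168.49.2:8441 errors in the trace.
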
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (299.327545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (1.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (731.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1222 23:00:31.033753   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:01:30.668953   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:02:53.713798   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:05:31.032890   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:06:30.668954   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-384766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m9.59594563s)

-- stdout --
	* [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000408574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
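
The SystemVerification warning repeated in the stderr above names the escape hatch directly: kubelet v1.35+ refuses to run on a cgroup v1 host unless the KubeletConfiguration option failCgroupV1 is explicitly set to false. A minimal sketch of applying that inside the node, assuming the config path the kubeadm output writes (/var/lib/kubelet/config.yaml) and that staying on the deprecated cgroup v1 path is acceptable; appending the line is illustrative only:

	# Sketch: explicitly re-enable the deprecated cgroup v1 path for kubelet.
	# failCgroupV1 is the option named in the SystemVerification warning above.
	minikube ssh -p functional-384766 -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml \
	   && sudo systemctl restart kubelet"

Note that minikube's own suggestion above, --extra-config=kubelet.cgroup-driver=systemd, adjusts the cgroup driver rather than this v1/v2 validation, so it may not clear the failure by itself; moving the host to cgroup v2 sidesteps the deprecated path entirely.
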
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-384766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m9.598268307s for "functional-384766" cluster.
I1222 23:09:45.795463   75803 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
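
For spot checks, the handful of fields this post-mortem actually reads can be pulled from the same data with docker inspect's Go-template flag instead of scanning the full JSON; a small sketch against the same container, with expected values taken from the dump above:

	# Sketch: extract container state and cluster IP from the inspect data.
	docker inspect -f \
	  '{{.State.Status}} {{(index .NetworkSettings.Networks "functional-384766").IPAddress}}' \
	  functional-384766
	# Prints "running 192.168.49.2" here, matching State and NetworkSettings above.
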
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (307.153763ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-580825 image ls --format json --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format short --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image   │ functional-580825 image ls --format yaml --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format table --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr                                            │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls                                                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ delete  │ -p functional-580825                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ start   │ -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start   │ -p functional-384766 --alsologtostderr -v=8                                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:51 UTC │                     │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:latest                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add minikube-local-cache-test:functional-384766                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache delete minikube-local-cache-test:functional-384766                                                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl images                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	│ cache   │ functional-384766 cache reload                                                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ kubectl │ functional-384766 kubectl -- --context functional-384766 get pods                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	│ start   │ -p functional-384766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:57:36
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:57:36.254392  158374 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:57:36.254700  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254705  158374 out.go:374] Setting ErrFile to fd 2...
	I1222 22:57:36.254708  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254883  158374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:57:36.255420  158374 out.go:368] Setting JSON to false
	I1222 22:57:36.256374  158374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9596,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:57:36.256458  158374 start.go:143] virtualization: kvm guest
	I1222 22:57:36.258393  158374 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:57:36.259562  158374 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:57:36.259584  158374 notify.go:221] Checking for updates...
	I1222 22:57:36.261710  158374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:57:36.262944  158374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:57:36.264212  158374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:57:36.265355  158374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:57:36.266271  158374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:57:36.267661  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:36.267820  158374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:57:36.296187  158374 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:57:36.296285  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.350829  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.341376778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.350930  158374 docker.go:319] overlay module found
	I1222 22:57:36.352570  158374 out.go:179] * Using the docker driver based on existing profile
	I1222 22:57:36.353588  158374 start.go:309] selected driver: docker
	I1222 22:57:36.353611  158374 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.353719  158374 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:57:36.353830  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.406492  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.397760538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.407140  158374 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 22:57:36.407175  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:36.407232  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:36.407286  158374 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.408996  158374 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:57:36.410078  158374 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:57:36.411119  158374 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:57:36.412129  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:36.412159  158374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:57:36.412174  158374 cache.go:65] Caching tarball of preloaded images
	I1222 22:57:36.412242  158374 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:57:36.412248  158374 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:57:36.412244  158374 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:57:36.412341  158374 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:57:36.431941  158374 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:57:36.431955  158374 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:57:36.431969  158374 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:57:36.431996  158374 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:57:36.432059  158374 start.go:364] duration metric: took 40.356µs to acquireMachinesLock for "functional-384766"
	I1222 22:57:36.432072  158374 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:57:36.432076  158374 fix.go:54] fixHost starting: 
	I1222 22:57:36.432265  158374 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:57:36.449079  158374 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:57:36.449100  158374 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:57:36.450671  158374 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:57:36.450705  158374 machine.go:94] provisionDockerMachine start ...
	I1222 22:57:36.450764  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.467607  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.467835  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.467841  158374 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:57:36.608433  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
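	The port template in the inspect calls above is the mechanism by which minikube finds its way into the container: it never assumes a fixed SSH port, it asks Docker which host port was bound to the container's 22/tcp. A standalone sketch of that lookup (container name taken from this log):
	
	  # print the host port Docker mapped to the container's SSH port (22/tcp)
	  docker container inspect \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	    functional-384766
	
	In this run it resolves to 32783, the port every SSH client line above and below connects to.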
	
	I1222 22:57:36.608449  158374 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:57:36.608504  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.626300  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.626509  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.626516  158374 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:57:36.777413  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.777486  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.795160  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.795380  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.795396  158374 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:57:36.935922  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
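	The shell block just executed is the provisioner's hostname guard for /etc/hosts. A distilled, annotated version of the same pattern, with HOST as a hypothetical stand-in for the machine name:
	
	  HOST=functional-384766                            # hypothetical stand-in
	  if ! grep -q "\s${HOST}\$" /etc/hosts; then       # hostname not mapped yet?
	    if grep -q '^127\.0\.1\.1\s' /etc/hosts; then   # rewrite the 127.0.1.1 line if one exists
	      sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOST}/" /etc/hosts
	    else                                            # otherwise append a fresh entry
	      echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
	    fi
	  fi
	
	The empty SSH output above means the guard found nothing to change on this machine.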
	I1222 22:57:36.935942  158374 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:57:36.935957  158374 ubuntu.go:190] setting up certificates
	I1222 22:57:36.935965  158374 provision.go:84] configureAuth start
	I1222 22:57:36.936023  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:36.954219  158374 provision.go:143] copyHostCerts
	I1222 22:57:36.954277  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:57:36.954291  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:57:36.954367  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:57:36.954466  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:57:36.954469  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:57:36.954495  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:57:36.954569  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:57:36.954572  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:57:36.954631  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:57:36.954687  158374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:57:36.981147  158374 provision.go:177] copyRemoteCerts
	I1222 22:57:36.981202  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:57:36.981239  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.000716  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.101499  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:57:37.118740  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:57:37.135018  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 22:57:37.151214  158374 provision.go:87] duration metric: took 215.234679ms to configureAuth
	I1222 22:57:37.151234  158374 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:57:37.151390  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:37.151430  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.168491  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.168730  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.168737  158374 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:57:37.310361  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:57:37.310376  158374 ubuntu.go:71] root file system type: overlay
	I1222 22:57:37.310489  158374 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:57:37.310547  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.329095  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.329306  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.329369  158374 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:57:37.478917  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:57:37.478994  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.496454  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.496687  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.496699  158374 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:57:37.641628  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
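	The one-liner above is an idempotent unit swap: the freshly rendered docker.service.new only replaces the installed unit, and Docker is only restarted, when the two files actually differ (diff exits non-zero on any difference). Spelled out with the same paths:
	
	  UNIT=/lib/systemd/system/docker.service
	  if ! sudo diff -u "$UNIT" "$UNIT.new"; then   # non-zero exit == files differ
	    sudo mv "$UNIT.new" "$UNIT"
	    sudo systemctl -f daemon-reload             # pick up the replaced unit
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	  fi
	
	Here the diff came back empty, so the restart branch did not fire.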
	I1222 22:57:37.641654  158374 machine.go:97] duration metric: took 1.190941144s to provisionDockerMachine
	I1222 22:57:37.641665  158374 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:57:37.641676  158374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:57:37.641727  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:57:37.641757  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.659069  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.759899  158374 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:57:37.763912  158374 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:57:37.763929  158374 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:57:37.763939  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:57:37.763985  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:57:37.764057  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:57:37.764125  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:57:37.764158  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:57:37.772288  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:37.789657  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:57:37.805946  158374 start.go:296] duration metric: took 164.267669ms for postStartSetup
	I1222 22:57:37.806019  158374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:57:37.806054  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.823397  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.920964  158374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:57:37.925567  158374 fix.go:56] duration metric: took 1.493483875s for fixHost
	I1222 22:57:37.925585  158374 start.go:83] releasing machines lock for "functional-384766", held for 1.493518865s
	I1222 22:57:37.925676  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:37.944340  158374 ssh_runner.go:195] Run: cat /version.json
	I1222 22:57:37.944379  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.944410  158374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:57:37.944475  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.962270  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.963480  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:38.111745  158374 ssh_runner.go:195] Run: systemctl --version
	I1222 22:57:38.118245  158374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 22:57:38.122628  158374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:57:38.122679  158374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:57:38.130349  158374 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:57:38.130362  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.130390  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.130482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.143844  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:57:38.152204  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:57:38.160833  158374 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.160878  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:57:38.168944  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.176827  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:57:38.185035  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.193068  158374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:57:38.200733  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:57:38.208877  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:57:38.217062  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:57:38.225212  158374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:57:38.231954  158374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:57:38.238562  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.319900  158374 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 22:57:38.394735  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.394777  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.394829  158374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:57:38.408181  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.420724  158374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:57:38.437862  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.450387  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:57:38.462197  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.475419  158374 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:57:38.478805  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:57:38.485878  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:57:38.497638  158374 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:57:38.579501  158374 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:57:38.662636  158374 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.662750  158374 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:57:38.675412  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:57:38.686668  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.767093  158374 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:57:39.452892  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:57:39.465276  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:57:39.477001  158374 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:57:39.491722  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.503501  158374 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:57:39.584904  158374 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:57:39.672762  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.748726  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:57:39.768653  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:57:39.780104  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.862790  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:57:39.934384  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.948030  158374 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:57:39.948084  158374 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:57:39.952002  158374 start.go:564] Will wait 60s for crictl version
	I1222 22:57:39.952049  158374 ssh_runner.go:195] Run: which crictl
	I1222 22:57:39.955397  158374 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:57:39.979213  158374 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:57:39.979270  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.004367  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.031792  158374 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:57:40.031863  158374 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
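	The --format template above flattens the network inspection into one JSON object. Any sub-expression of it can be run on its own; for example, pulling out just the subnet of the cluster network:
	
	  # print only the subnet of the minikube network (template lifted from the log line above)
	  docker network inspect functional-384766 \
	    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'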
	I1222 22:57:40.047933  158374 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:57:40.053698  158374 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 22:57:40.054726  158374 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:57:40.054846  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:40.054890  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.076020  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.076046  158374 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:57:40.076111  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.096347  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.096366  158374 cache_images.go:86] Images are preloaded, skipping loading
	I1222 22:57:40.096374  158374 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:57:40.096468  158374 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 22:57:40.096517  158374 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:57:40.147179  158374 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 22:57:40.147206  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:40.147226  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:40.147236  158374 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:57:40.147256  158374 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:57:40.147375  158374 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
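	A few lines below, this generated config is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the deployed copy. As an aside, assuming a kubeadm new enough to ship the validate subcommand (v1.26+, so true of the v1.35.0-rc.1 binaries on this node), such a file could be vetted by hand before deployment; this log itself never runs this, it is illustrative only:
	
	  # hypothetical sanity check of the generated config; not executed in this log
	  sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new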
	
	I1222 22:57:40.147436  158374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:57:40.155394  158374 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:57:40.155439  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:57:40.163036  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:57:40.175169  158374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:57:40.187093  158374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1222 22:57:40.198818  158374 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:57:40.202222  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:40.283126  158374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:57:40.747886  158374 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:57:40.747899  158374 certs.go:195] generating shared ca certs ...
	I1222 22:57:40.747914  158374 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:57:40.748072  158374 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:57:40.748113  158374 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:57:40.748119  158374 certs.go:257] generating profile certs ...
	I1222 22:57:40.748199  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:57:40.748236  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:57:40.748278  158374 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:57:40.748397  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:57:40.748423  158374 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:57:40.748429  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:57:40.748451  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:57:40.748470  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:57:40.748489  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:57:40.748525  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:40.749053  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:57:40.768237  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:57:40.787559  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:57:40.804276  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:57:40.820613  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:57:40.836790  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:57:40.852839  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:57:40.869050  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:57:40.885231  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:57:40.901347  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:57:40.917338  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:57:40.933332  158374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:57:40.944903  158374 ssh_runner.go:195] Run: openssl version
	I1222 22:57:40.950515  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.957071  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:57:40.963749  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.966999  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.967032  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:57:41.000342  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 22:57:41.007579  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.014450  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:57:41.021442  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024853  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024902  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.058138  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:57:41.065135  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.071858  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:57:41.078672  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082051  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082083  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.115012  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
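	The test -L probes above explain the hash-named files: OpenSSL locates trusted CAs in /etc/ssl/certs by subject hash, so each installed PEM gets a <hash>.0 symlink. The mapping can be reproduced by hand (PEM path taken from this log):
	
	  pem=/usr/share/ca-certificates/minikubeCA.pem
	  h=$(openssl x509 -hash -noout -in "$pem")     # subject hash; b5213941 for this CA
	  sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"    # the link probed with test -L above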
	I1222 22:57:41.122326  158374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:57:41.125872  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:57:41.158840  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:57:41.191689  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:57:41.224669  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:57:41.258802  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:57:41.292531  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
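	Each of the six openssl invocations above is an expiry probe: -checkend 86400 exits non-zero if the certificate will expire within the next 24 hours, which is presumably how minikube decides whether control-plane certs can be reused. In isolation:
	
	  # exit status 0: still valid in 24h; non-zero: expiring soon
	  sudo openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "cert good for another day" \
	    || echo "cert expires within 24h"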
	I1222 22:57:41.327828  158374 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:41.327941  158374 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.347229  158374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:57:41.355058  158374 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:57:41.355067  158374 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:57:41.355102  158374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:57:41.362198  158374 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.362672  158374 kubeconfig.go:125] found "functional-384766" server: "https://192.168.49.2:8441"
	I1222 22:57:41.363809  158374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:57:41.371022  158374 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 22:43:13.034628184 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 22:57:40.197478933 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
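kubeadm.go:645 treats a non-zero exit from "diff -u" between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new as configuration drift; here the drift is the test's ExtraOptions replacing the default admission-plugin list with NamespaceAutoProvision. A hypothetical wrapper around the same diff invocation:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("no drift; keep the running configuration")
    	case errors.As(err, &ee) && ee.ExitCode() == 1:
    		// diff exits 1 when the files differ: reconfigure from the new yaml.
    		fmt.Printf("config drift detected:\n%s", out)
    	default:
    		fmt.Println("diff failed:", err)
    	}
    }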
	I1222 22:57:41.371029  158374 kubeadm.go:1161] stopping kube-system containers ...
	I1222 22:57:41.371066  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.389715  158374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 22:57:41.415695  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 22:57:41.423304  158374 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 22 22:47 /etc/kubernetes/scheduler.conf
	
	I1222 22:57:41.423364  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 22:57:41.430717  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 22:57:41.437811  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.437848  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 22:57:41.444879  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.452191  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.452233  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.459225  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 22:57:41.466383  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.466418  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
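Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint. No removal is logged after the admin.conf grep, so it matched; the kubelet, controller-manager, and scheduler configs do not (grep exits with status 1), so they are deleted and left for kubeadm to regenerate. A sketch of that check-or-remove loop, illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8441"
    	for _, f := range []string{
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits 1 when the endpoint is absent (the "status 1" above).
    		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
    			fmt.Println("stale kubeconfig, removing:", f)
    			_ = os.Remove(f) // kubeadm regenerates it in the next phase
    		}
    	}
    }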
	I1222 22:57:41.473427  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 22:57:41.480724  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.518575  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.974225  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.135961  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.183844  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
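Rather than a full "kubeadm init", the restart replays individual init phases against the updated config: certs, kubeconfigs, kubelet start, control-plane static-pod manifests, and local etcd. A sketch of that phase sequence, with the phases taken from the log and the error handling illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", cfg)
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    	fmt.Println("control-plane manifests and kubeconfigs regenerated")
    }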
	I1222 22:57:42.223254  158374 api_server.go:52] waiting for apiserver process to appear ...
	I1222 22:57:42.223318  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... "sudo pgrep -xnf kube-apiserver.*minikube.*" repeated at ~500 ms intervals from 22:57:42.723474 through 22:58:41.223668, with no kube-apiserver process found ...]
	I1222 22:58:41.723695  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
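With the manifests regenerated, api_server.go waits for a kube-apiserver process to appear, polling pgrep roughly every 500 ms; in this run the process never shows up, so the wait gives up and minikube starts collecting diagnostics below. A sketch of such a poll loop (the one-minute window matches this log slice but is an assumption, not a documented constant):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out; fall back to collecting diagnostics")
    }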
	I1222 22:58:42.223527  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:42.242498  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.242530  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:42.242576  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:42.263682  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.263696  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:42.263747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:42.284235  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.284250  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:42.284330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:42.303204  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.303219  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:42.303263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:42.321387  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.321404  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:42.321461  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:42.340277  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.340290  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:42.340333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:42.359009  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.359025  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:42.359034  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:42.359044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:42.407304  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:42.407323  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:42.423167  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:42.423184  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:42.478018  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
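The describe-nodes probe fails before any HTTP exchange happens: the kubeconfig points kubectl at localhost:8441, and with no kube-apiserver container running nothing is listening there, hence "connect: connection refused". A minimal reachability check that reproduces the same failure mode, illustrative only:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// Matches the dials above: the TCP connection is refused
    		// before kubectl can issue any API request.
    		fmt.Println("apiserver endpoint unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8441")
    }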
	I1222 22:58:42.478032  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:42.478050  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:42.508140  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:42.508159  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
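The container-status command uses a shell fallback, "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", so it works whether or not crictl is on the PATH. After each diagnostics pass minikube waits briefly and re-enters the pgrep/docker-ps cycle, which is why the same block repeats below with fresh timestamps. A sketch of that gather-and-retry loop (abridged; the retry count and sleep are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // gather runs one abridged diagnostics pass like the block above.
    func gather() {
    	cmds := []string{
    		"sudo journalctl -u kubelet -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"sudo journalctl -u docker -u cri-docker -n 400",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for _, c := range cmds {
    		out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		fmt.Printf("== %s ==\n%s\n", c, out)
    	}
    }

    func main() {
    	for attempt := 0; attempt < 3; attempt++ {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("apiserver came up")
    			return
    		}
    		gather()
    		time.Sleep(2500 * time.Millisecond)
    	}
    }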
	I1222 22:58:45.047948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:45.058851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:45.078438  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.078457  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:45.078506  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:45.096664  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.096678  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:45.096729  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:45.114982  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.114995  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:45.115033  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:45.132907  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.132920  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:45.132960  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:45.151352  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.151368  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:45.151409  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:45.169708  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.169725  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:45.169767  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:45.187775  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.187790  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:45.187802  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:45.187814  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:45.242776  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:45.242790  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:45.242800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:45.273873  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:45.273892  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:45.303522  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:45.303541  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:45.351682  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:45.351702  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:47.869586  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:47.880760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:47.899543  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.899560  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:47.899617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:47.917954  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.917970  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:47.918017  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:47.936207  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.936224  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:47.936269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:47.954310  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.954328  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:47.954376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:47.971746  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.971762  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:47.971806  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:47.989993  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.990008  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:47.990054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:48.008188  158374 logs.go:282] 0 containers: []
	W1222 22:58:48.008204  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:48.008215  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:48.008227  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:48.056174  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:48.056192  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:48.071128  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:48.071143  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:48.124584  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:48.124621  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:48.124635  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:48.155889  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:48.155907  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:50.685742  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:50.696961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:50.716371  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.716385  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:50.716430  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:50.734780  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.734798  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:50.734842  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:50.753152  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.753169  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:50.753213  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:50.771281  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.771296  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:50.771338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:50.788814  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.788826  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:50.788872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:50.806768  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.806781  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:50.806837  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:50.824539  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.824552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:50.824561  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:50.824581  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:50.873346  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:50.873363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:50.888174  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:50.888188  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:50.942890  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:50.942904  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:50.942915  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:50.971205  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:50.971223  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:53.500770  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:53.512538  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:53.535794  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.535812  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:53.535872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:53.554667  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.554684  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:53.554739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:53.573251  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.573267  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:53.573317  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:53.591664  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.591686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:53.591739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:53.610128  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.610141  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:53.610183  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:53.628089  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.628105  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:53.628148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:53.645890  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.645908  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:53.645919  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:53.645932  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:53.692043  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:53.692062  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:53.707092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:53.707107  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:53.761308  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:53.761320  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:53.761331  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:53.789713  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:53.789730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:56.318819  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:56.329790  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:56.348795  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.348808  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:56.348851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:56.366850  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.366866  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:56.366932  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:56.385468  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.385483  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:56.385530  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:56.404330  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.404345  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:56.404406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:56.422533  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.422549  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:56.422631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:56.440667  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.440681  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:56.440742  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:56.459073  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.459088  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:56.459099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:56.459113  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:56.506766  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:56.506783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:56.523645  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:56.523667  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:56.580517  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:56.580531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:56.580543  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:56.610571  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:56.610588  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.140001  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:59.151059  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:59.169787  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.169801  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:59.169840  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:59.187907  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.187919  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:59.187959  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:59.206755  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.206770  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:59.206811  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:59.225123  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.225139  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:59.225179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:59.243400  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.243414  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:59.243453  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:59.261475  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.261492  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:59.261556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:59.279819  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.279834  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:59.279844  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:59.279855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:59.295024  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:59.295046  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:59.349874  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:59.343012   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.343571   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345099   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345548   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.347033   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:58:59.349889  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:59.349902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:59.381356  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:59.381378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.409144  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:59.409160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
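
The block above is one full iteration of the wait loop that produced this failure: probe for a kube-apiserver process, check each expected control-plane container by its k8s_ name prefix, and, finding nothing, gather kubelet, dmesg, describe-nodes, Docker, and container-status logs before retrying. The same probe can be reproduced by hand, as a sketch assuming a shell inside the minikube node (e.g. via minikube ssh) and the Docker runtime this run uses; the commands are the ones the loop itself runs:

    # probe for a running apiserver process; pgrep exits 1 when none matches
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # check each expected control-plane container by Docker name filter
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'
    done

In this run every filter prints nothing, so no control-plane container was ever created, and each iteration falls through to log gathering.
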
	I1222 22:59:01.955340  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:01.966166  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:01.985075  158374 logs.go:282] 0 containers: []
	W1222 22:59:01.985088  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:01.985135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:02.003681  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.003695  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:02.003748  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:02.022064  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.022081  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:02.022127  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:02.040290  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.040302  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:02.040346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:02.058109  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.058123  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:02.058167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:02.076398  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.076415  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:02.076469  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:02.095264  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.095326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:02.095338  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:02.095350  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:02.140655  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:02.140678  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:02.156234  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:02.156248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:02.212079  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:02.205182   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.205716   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207280   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207782   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.209314   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:02.205182   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.205716   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207280   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207782   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.209314   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:02.212094  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:02.212106  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:02.241399  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:02.241415  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:04.771709  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:04.783605  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:04.802797  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.802811  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:04.802907  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:04.822172  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.822187  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:04.822232  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:04.840265  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.840280  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:04.840320  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:04.858270  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.858287  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:04.858329  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:04.876142  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.876158  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:04.876204  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:04.894156  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.894169  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:04.894209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:04.912355  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.912373  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:04.912383  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:04.912393  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:04.940312  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:04.940332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:04.985353  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:04.985370  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:05.000242  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:05.000264  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:05.054276  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:05.047325   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.047883   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049424   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049887   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.051401   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:05.047325   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.047883   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049424   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049887   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.051401   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:05.054288  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:05.054298  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:07.583327  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:07.594487  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:07.614008  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.614023  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:07.614073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:07.633345  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.633364  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:07.633410  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:07.651888  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.651900  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:07.651939  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:07.670373  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.670389  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:07.670431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:07.687752  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.687772  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:07.687819  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:07.707382  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.707397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:07.707449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:07.725692  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.725705  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:07.725714  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:07.725724  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:07.741276  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:07.741290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:07.807688  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:07.800646   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.801223   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.802769   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.803177   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.804708   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:07.800646   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.801223   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.802769   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.803177   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.804708   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:07.807698  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:07.807708  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:07.838193  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:07.838211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:07.867411  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:07.867429  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.417278  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:10.428172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:10.447192  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.447210  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:10.447268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:10.465742  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.465755  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:10.465802  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:10.483930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.483943  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:10.483982  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:10.502550  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.502564  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:10.502631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:10.521157  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.521170  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:10.521217  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:10.539930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.539944  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:10.539988  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:10.557819  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.557836  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:10.557847  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:10.557860  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:10.586007  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:10.586023  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.630906  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:10.630928  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:10.645969  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:10.645986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:10.700369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:10.693704   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.694182   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.695738   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.696170   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.697667   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:10.693704   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.694182   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.695738   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.696170   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.697667   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:10.700383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:10.700396  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.229999  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:13.241245  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:13.261612  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.261629  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:13.261685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:13.279825  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.279843  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:13.279893  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:13.297933  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.297951  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:13.298008  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:13.316218  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.316235  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:13.316315  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:13.334375  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.334389  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:13.334444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:13.353104  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.353123  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:13.353179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:13.371772  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.371791  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:13.371802  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:13.371816  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:13.419777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:13.419800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:13.435473  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:13.435489  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:13.490824  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:13.484010   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.484560   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486103   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486541   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.488011   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:13.484010   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.484560   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486103   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486541   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.488011   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:13.490835  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:13.490848  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.519782  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:13.519800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.052715  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:16.064085  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:16.083176  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.083195  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:16.083255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:16.102468  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.102485  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:16.102532  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:16.121564  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.121580  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:16.121654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:16.140862  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.140879  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:16.140928  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:16.159281  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.159295  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:16.159347  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:16.177569  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.177606  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:16.177659  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:16.196491  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.196507  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:16.196516  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:16.196526  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.225379  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:16.225399  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:16.270312  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:16.270332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:16.285737  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:16.285752  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:16.339892  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:16.332955   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.333507   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335037   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335481   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.337003   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:16.332955   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.333507   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335037   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335481   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.337003   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:16.339906  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:16.339924  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:18.870402  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:18.881333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:18.899917  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.899940  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:18.899987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:18.918652  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.918666  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:18.918711  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:18.936854  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.936871  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:18.936930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:18.956082  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.956099  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:18.956148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:18.974672  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.974690  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:18.974747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:18.993264  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.993281  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:18.993330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:19.013308  158374 logs.go:282] 0 containers: []
	W1222 22:59:19.013325  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:19.013335  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:19.013346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:19.063311  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:19.063330  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:19.078990  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:19.079012  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:19.135746  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:19.127970   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.128563   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130146   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130556   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.132525   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:19.127970   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.128563   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130146   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130556   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.132525   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:19.135757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:19.135778  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:19.165331  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:19.165348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:21.694471  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:21.705412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:21.724588  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.724617  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:21.724663  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:21.744659  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.744677  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:21.744732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:21.762841  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.762858  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:21.762913  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:21.782008  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.782023  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:21.782064  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:21.801013  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.801031  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:21.801077  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:21.817861  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.817879  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:21.817936  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:21.836076  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.836093  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:21.836104  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:21.836115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:21.884827  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:21.884849  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:21.900053  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:21.900069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:21.955238  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:21.947967   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.948481   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950007   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950444   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.952014   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:21.947967   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.948481   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950007   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950444   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.952014   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:21.955248  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:21.955258  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:21.984138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:21.984157  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.515104  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:24.526883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:24.546166  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.546180  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:24.546228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:24.565305  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.565319  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:24.565361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:24.584559  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.584572  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:24.584631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:24.604650  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.604664  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:24.604712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:24.623346  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.623362  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:24.623412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:24.642324  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.642343  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:24.642406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:24.661990  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.662004  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:24.662013  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:24.662024  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:24.677840  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:24.677855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:24.734271  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:24.726969   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.727498   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729051   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729473   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.731041   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:24.726969   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.727498   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729051   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729473   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.731041   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:24.734289  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:24.734304  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:24.764562  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:24.764580  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.793099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:24.793115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.340497  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:27.351904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:27.372400  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.372419  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:27.372472  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:27.392295  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.392312  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:27.392363  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:27.411771  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.411784  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:27.411828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:27.430497  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.430512  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:27.430558  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:27.449983  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.449999  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:27.450044  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:27.469696  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.469714  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:27.469771  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:27.488685  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.488702  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:27.488715  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:27.488730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:27.517546  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:27.517564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.564530  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:27.564554  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:27.579944  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:27.579963  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:27.636369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:27.629189   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.629734   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631292   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631750   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.633229   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:27.629189   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.629734   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631292   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631750   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.633229   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:27.636383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:27.636394  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
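
Each diagnostic cycle above probes for the control-plane containers by shelling out to docker ps with a k8s_<component> name filter and an {{.ID}} format string; an empty result is what logs.go:282 reports as "0 containers". A minimal Go sketch of that probe (illustrative only, not minikube's own logs.go, and run against a local docker CLI rather than through the ssh_runner seen in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists the IDs of all containers (running or exited)
    // whose name matches the kubeadm naming convention k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "probe failed:", err)
                continue
            }
            fmt.Printf("%q: %d containers: %v\n", c, len(ids), ids)
        }
    }
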
	I1222 22:59:30.168117  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:30.179633  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:30.199078  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.199094  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:30.199144  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:30.218504  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.218517  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:30.218559  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:30.237792  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.237810  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:30.237858  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:30.257058  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.257073  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:30.257118  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:30.277405  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.277422  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:30.277475  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:30.297453  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.297467  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:30.297515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:30.316894  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.316915  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:30.316924  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:30.316936  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.346684  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:30.346705  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:30.376362  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:30.376378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:30.422918  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:30.422940  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:30.438917  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:30.438935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:30.494621  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
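
The repeated "connection refused" stderr means nothing is listening on the apiserver port at all: kubectl's TCP connect to localhost:8441 is rejected before any TLS or authentication happens. A quick way to confirm that from the node, sketched in Go (the address comes from the localhost:8441 URL in the kubectl errors above; the checker itself is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // With no apiserver container running this prints
            // "... connect: connection refused", matching the kubectl stderr above.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
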
	I1222 22:59:32.995681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:33.006896  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:33.026274  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.026292  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:33.026336  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:33.045071  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.045087  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:33.045134  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:33.064583  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.064611  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:33.064660  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:33.085351  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.085374  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:33.085431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:33.103978  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.103991  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:33.104045  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:33.123168  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.123186  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:33.123241  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:33.143080  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.143095  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:33.143105  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:33.143116  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:33.197825  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:33.190806   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.191305   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.192895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.193360   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.194895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:33.190806   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.191305   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.192895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.193360   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.194895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:33.197836  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:33.197850  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:33.226457  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:33.226476  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:33.257519  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:33.257546  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:33.309950  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:33.309971  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
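
Every cycle begins with sudo pgrep -xnf kube-apiserver.*minikube.*, which only succeeds if a kube-apiserver process whose full command line matches the pattern exists; here the apiserver evidently never comes up, so the collection loop keeps repeating. A sketch of that check in Go (illustrative; pgrep exits 1 when no process matches, which os/exec surfaces as an ExitError):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Exit status 1: no matching process, the situation in this log.
            fmt.Println("no kube-apiserver process found")
            return
        }
        if err != nil {
            fmt.Println("pgrep failed:", err)
            return
        }
        fmt.Printf("newest matching PID: %s", out)
    }
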
	I1222 22:59:35.827217  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:35.838617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:35.858342  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.858358  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:35.858412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:35.877344  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.877362  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:35.877416  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:35.897833  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.897848  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:35.897902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:35.916409  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.916428  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:35.916485  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:35.935688  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.935705  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:35.935766  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:35.954858  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.954876  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:35.954924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:35.973729  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.973746  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:35.973757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:35.973767  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:36.002045  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:36.002069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:36.029933  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:36.029949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:36.075963  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:36.075988  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:36.091711  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:36.091734  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:36.147521  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:36.140402   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.141008   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142529   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142962   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.144445   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:36.140402   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.141008   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142529   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142962   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.144445   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:38.649172  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:38.660310  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:38.679380  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.679396  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:38.679449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:38.698305  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.698318  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:38.698365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:38.717524  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.717541  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:38.717601  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:38.736808  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.736822  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:38.736874  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:38.756003  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.756017  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:38.756061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:38.774845  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.774858  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:38.774901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:38.793240  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.793257  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:38.793269  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:38.793281  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:38.821390  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:38.821407  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:38.868649  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:38.868671  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:38.884729  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:38.884749  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:38.940189  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:38.933255   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.933828   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935320   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935747   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.937211   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:38.933255   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.933828   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935320   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935747   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.937211   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:38.940200  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:38.940211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
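
The "describe nodes" step that fails in every cycle runs the version-pinned kubectl inside the node against the in-VM kubeconfig; its stderr is where the "connection refused" lines originate. A Go sketch of reproducing that step and capturing both streams (binary and kubeconfig paths are taken verbatim from the log; the wrapper itself is illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("describe nodes failed:", err)
            fmt.Print(stderr.String()) // the "connection refused" diagnostics
            return
        }
        fmt.Print(stdout.String())
    }
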
	I1222 22:59:41.470854  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:41.481957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:41.501032  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.501051  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:41.501102  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:41.522720  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.522740  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:41.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:41.544756  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.544769  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:41.544812  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:41.564773  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.564789  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:41.565312  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:41.586087  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.586104  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:41.586156  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:41.604141  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.604156  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:41.604206  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:41.623828  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.623846  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:41.623858  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:41.623870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:41.652778  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:41.652798  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:41.680995  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:41.681014  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:41.728777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:41.728800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:41.744897  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:41.744916  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:41.800644  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:41.793494   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.794046   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.795564   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.796035   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.797584   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:41.793494   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.794046   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.795564   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.796035   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.797584   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
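
The timestamps show the whole cycle repeating on a roughly 2.5 s cadence until an overall timeout expires, at which point the test fails. The shape of that wait loop, sketched in Go (a generic poll-until-deadline; check() stands in for the probes shown in the log, and the 2-minute deadline is illustrative, not minikube's actual timeout):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // check reports whether anything accepts TCP connections on the
    // apiserver port seen in the log.
    func check() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if check() {
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(2500 * time.Millisecond) // cadence observed above
        }
        fmt.Println("gave up waiting for apiserver")
    }
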
	I1222 22:59:44.302472  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:44.313688  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:44.333253  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.333267  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:44.333313  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:44.352778  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.352793  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:44.352851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:44.372079  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.372093  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:44.372135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:44.390683  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.390701  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:44.390761  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:44.409168  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.409185  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:44.409259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:44.426368  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.426381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:44.426426  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:44.444108  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.444124  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:44.444138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:44.444148  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:44.481663  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:44.481679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:44.529101  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:44.529121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:44.546062  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:44.546081  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:44.600660  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:44.593909   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.594437   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.595959   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.596345   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.597904   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:44.593909   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.594437   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.595959   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.596345   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.597904   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:44.600672  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:44.600684  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.129588  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:47.140641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:47.159435  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.159453  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:47.159498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:47.178540  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.178560  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:47.178634  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:47.198365  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.198383  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:47.198438  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:47.217411  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.217429  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:47.217479  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:47.236273  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.236287  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:47.236330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:47.255917  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.255930  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:47.255973  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:47.274750  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.274768  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:47.274779  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:47.274792  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:47.322428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:47.322452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:47.339666  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:47.339691  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:47.396552  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:47.389135   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.389730   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391303   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391736   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.393254   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:47.389135   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.389730   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391303   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391736   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.393254   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:47.396562  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:47.396574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.425768  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:47.425785  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:49.955844  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:49.966834  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:49.985390  158374 logs.go:282] 0 containers: []
	W1222 22:59:49.985405  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:49.985446  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:50.003669  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.003687  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:50.003735  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:50.023188  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.023203  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:50.023254  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:50.042292  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.042309  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:50.042360  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:50.060457  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.060471  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:50.060516  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:50.078548  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.078565  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:50.078666  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:50.096685  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.096704  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:50.096717  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:50.096730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:50.125658  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:50.125680  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:50.173107  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:50.173124  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:50.188136  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:50.188152  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:50.242225  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:50.235426   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.235931   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237475   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237960   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.239487   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:50.235426   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.235931   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237475   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237960   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.239487   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:50.242236  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:50.242246  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:52.771712  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:52.783330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:52.802157  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.802171  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:52.802219  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:52.820709  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.820726  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:52.820777  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:52.839433  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.839448  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:52.839515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:52.857834  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.857849  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:52.857903  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:52.875916  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.875933  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:52.875977  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:52.893339  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.893351  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:52.893394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:52.911298  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.911311  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:52.911319  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:52.911329  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:52.942377  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:52.942392  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:52.969572  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:52.969587  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:53.014323  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:53.014339  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:53.029751  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:53.029764  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:53.085527  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:53.078407   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.078987   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.080619   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.081049   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.082721   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:53.078407   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.078987   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.080619   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.081049   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.082721   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:55.587247  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:55.598436  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:55.617688  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.617704  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:55.617764  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:55.637510  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.637528  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:55.637585  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:55.656117  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.656132  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:55.656187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:55.675258  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.675278  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:55.675327  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:55.694537  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.694555  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:55.694627  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:55.711993  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.712011  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:55.712056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:55.730198  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.730216  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:55.730228  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:55.730242  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:55.795390  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:55.795416  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:55.811790  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:55.811809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:55.867201  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:55.859754   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.860414   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862051   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862474   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.863985   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:55.867213  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:55.867224  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:55.898358  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:55.898381  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.428962  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:58.440024  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:58.459773  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.459787  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:58.459828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:58.478843  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.478863  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:58.478920  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:58.498503  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.498518  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:58.498563  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:58.518032  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.518052  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:58.518110  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:58.537315  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.537330  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:58.537388  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:58.556299  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.556319  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:58.556368  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:58.575345  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.575359  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:58.575369  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:58.575378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.603490  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:58.603508  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:58.651589  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:58.651620  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:58.667341  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:58.667358  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:58.723840  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:58.716532   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.717054   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.718649   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.719098   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.720749   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
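Each polling pass above looks for the seven control-plane containers with the same docker ps name filter. A sketch reproducing that scan; the component names and the k8s_ prefix are taken from the log lines, but the program itself is illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filters the log shows: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v (%q)\n", len(ids), ids, c)
	}
}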
	I1222 22:59:58.723855  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:58.723865  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.257052  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:01.268153  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:01.287939  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.287954  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:01.288001  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:01.306844  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.306857  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:01.306904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:01.326511  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.326530  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:01.326579  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:01.345734  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.345748  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:01.345793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:01.364619  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.364634  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:01.364682  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:01.383578  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.383605  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:01.383654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:01.401753  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.401770  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:01.401781  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:01.401795  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:01.457583  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:01.450373   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.450906   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452946   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.454493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:01.457611  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:01.457625  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.486870  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:01.486891  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:01.514587  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:01.514619  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:01.561028  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:01.561052  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:04.078615  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:04.089843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:04.109432  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.109450  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:04.109498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:04.128585  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.128630  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:04.128680  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:04.147830  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.147846  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:04.147901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:04.166672  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.166686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:04.166730  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:04.185500  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.185523  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:04.185574  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:04.204345  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.204360  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:04.204404  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:04.222488  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.222503  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:04.222513  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:04.222523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:04.252225  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:04.252244  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:04.280489  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:04.280507  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:04.329635  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:04.329657  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:04.345631  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:04.345650  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:04.400851  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:04.393656   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.394230   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.395887   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.396281   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.397838   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:06.901498  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:06.913084  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:06.932724  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.932739  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:06.932793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:06.951127  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.951146  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:06.951187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:06.969488  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.969501  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:06.969543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:06.987763  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.987780  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:06.987824  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:07.005884  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.005900  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:07.005951  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:07.026370  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.026397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:07.026449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:07.047472  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.047486  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:07.047496  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:07.047505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:07.092662  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:07.092679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:07.107657  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:07.107672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:07.162182  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:07.155104   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.155706   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157237   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157737   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.159269   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
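The "Gathering logs for ..." steps each run through /bin/bash -c so that pipes and command substitution inside the quoted strings work. A sketch of that wrapper pattern, with the command strings copied verbatim from the log; the runShell helper is hypothetical, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
)

// runShell mirrors the log's /bin/bash -c "<cmd>" invocation pattern.
func runShell(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	for _, cmd := range []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		`sudo journalctl -u docker -u cri-docker -n 400`,
	} {
		out, err := runShell(cmd)
		if err != nil {
			fmt.Printf("%q failed: %v\n", cmd, err)
			continue
		}
		fmt.Printf("%q returned %d bytes of logs\n", cmd, len(out))
	}
}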
	I1222 23:00:07.162193  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:07.162203  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:07.190466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:07.190482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:09.719767  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:09.730961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:09.750004  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.750021  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:09.750061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:09.768191  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.768203  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:09.768240  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:09.785655  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.785668  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:09.785715  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:09.803931  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.803946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:09.803987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:09.823040  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.823058  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:09.823105  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:09.841359  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.841373  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:09.841413  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:09.859786  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.859799  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:09.859812  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:09.859824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:09.905428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:09.905445  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:09.920496  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:09.920511  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:09.974948  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:09.968124   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.968667   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970182   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970583   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.972119   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:09.974969  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:09.974982  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:10.003466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:10.003485  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.535644  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:12.546867  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:12.565761  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.565778  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:12.565825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:12.584431  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.584446  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:12.584504  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:12.602950  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.602966  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:12.603009  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:12.621210  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.621224  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:12.621268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:12.639377  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.639393  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:12.639444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:12.657924  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.657941  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:12.657984  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:12.676311  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.676326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:12.676336  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:12.676346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.703500  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:12.703515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:12.750933  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:12.750951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:12.766856  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:12.766870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:12.822138  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:12.815289   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.815808   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817339   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817801   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.819283   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:12.822170  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:12.822269  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.355685  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:15.366722  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:15.385319  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.385334  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:15.385401  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:15.402653  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.402666  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:15.402712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:15.420695  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.420709  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:15.420757  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:15.438422  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.438438  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:15.438488  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:15.457961  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.457978  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:15.458023  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:15.477016  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.477031  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:15.477075  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:15.495320  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.495335  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:15.495346  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:15.495363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:15.542697  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:15.542716  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:15.557986  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:15.558002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:15.613071  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:15.605742   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.606273   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.607863   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.608322   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.609898   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:15.613082  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:15.613093  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.643893  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:15.643912  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:18.176478  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:18.187435  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:18.206820  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.206836  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:18.206885  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:18.225162  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.225179  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:18.225242  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:18.244089  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.244106  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:18.244149  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:18.263582  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.263618  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:18.263678  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:18.285421  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.285439  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:18.285483  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:18.304575  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.304616  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:18.304679  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:18.322814  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.322831  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:18.322842  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:18.322853  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:18.367678  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:18.367695  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:18.384038  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:18.384060  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:18.439158  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:18.432401   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.432947   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434455   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434840   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.436266   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:18.439172  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:18.439186  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:18.468274  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:18.468290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:20.996786  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:21.007676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:21.026577  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.026589  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:21.026662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:21.045179  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.045195  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:21.045237  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:21.064216  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.064230  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:21.064278  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:21.082929  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.082946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:21.082991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:21.101298  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.101314  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:21.101372  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:21.119708  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.119719  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:21.119759  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:21.137828  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.137841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:21.137849  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:21.137859  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:21.167198  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:21.167214  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:21.194956  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:21.194974  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:21.243666  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:21.243687  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:21.259092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:21.259108  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:21.316128  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:21.309163   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.309711   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311266   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311721   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.313183   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
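Taken together, the timestamps show one full diagnostic pass roughly every 2.5 to 3 seconds, and no pass ever observes a running kube-apiserver, so the loop repeats until the start timeout expires and the test fails. A sketch of that overall shape; the interval and deadline are illustrative values, not minikube's real configuration (the real timeout is evidently several minutes, given the 497.72s test duration):

package main

import (
	"fmt"
	"time"
)

// apiserverUp stands in for the pgrep and docker-ps checks seen in the log;
// in this failing run the equivalent check never succeeds.
func apiserverUp() bool { return false }

func main() {
	deadline := time.Now().Add(1 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is running")
			return
		}
		// gather diagnostics here, then wait before the next pass
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}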
	I1222 23:00:23.817830  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:23.829010  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:23.847819  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.847833  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:23.847883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:23.866626  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.866640  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:23.866685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:23.884038  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.884053  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:23.884099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:23.903021  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.903037  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:23.903091  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:23.921758  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.921771  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:23.921817  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:23.940118  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.940135  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:23.940176  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:23.958805  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.958817  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:23.958826  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:23.958836  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:24.006524  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:24.006542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:24.021579  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:24.021602  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:24.077965  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:24.077976  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:24.077986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:24.107448  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:24.107464  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
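	Each polling iteration gathers the same evidence: a pgrep for a running kube-apiserver, one docker ps per expected control-plane container, then kubelet, dmesg, describe-nodes, Docker, and container-status logs. A hedged bash sketch of one iteration, assembled from the Run: lines above (the loop structure and variable names are a reconstruction for illustration, not minikube's actual source):

	# Look for control-plane containers by name; minikube prefixes them with k8s_
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(docker ps -a --filter=name=k8s_${c} --format={{.ID}})
	    [ -z "$ids" ] && echo "No container was found matching \"${c}\""
	done
	sudo journalctl -u kubelet -n 400                                  # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig                      # fails while the apiserver is down
	sudo journalctl -u docker -u cri-docker -n 400                     # Docker / cri-docker logs
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a     # container status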
	... [the poll and log-gathering loop above repeats unchanged roughly every 2.7 seconds, at 23:00:26, 23:00:29, 23:00:32, 23:00:35, 23:00:37, 23:00:40, 23:00:43, 23:00:46, and 23:00:49: the same "connection refused" on localhost:8441 and the same empty k8s_* container lists; only the timestamps and kubectl pids (25667, 25841, 25971, 26124, 26282, 26438, 26577, 26740, 26901) change] ...
	I1222 23:00:52.015822  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:52.027799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:52.047753  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.047770  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:52.047825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:52.067486  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.067502  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:52.067557  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:52.086094  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.086110  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:52.086158  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:52.105497  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.105513  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:52.105560  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:52.123560  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.123575  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:52.123641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:52.141887  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.141905  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:52.141957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:52.160464  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.160480  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:52.160491  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:52.160500  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:52.207605  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:52.207623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:52.222700  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:52.222714  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:52.277875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:52.270341   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.270915   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.272985   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.273548   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.275027   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:52.277887  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:52.277899  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:52.307146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:52.307163  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
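	The seven docker ps probes in each cycle above are minikube checking, component by component, whether any control-plane container exists at all (running or exited); every probe returns an empty list. The same sweep can be reproduced inside the node with a short loop, a sketch built from the exact filters the log shows (run via minikube ssh on the same profile):
	    # Sketch: repeat minikube's per-component container probe.
	    for c in kube-apiserver etcd coredns kube-scheduler \
	             kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	      echo "${c}: ${ids:-<no containers>}"
	    done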
	I1222 23:00:54.836681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:54.847631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:54.866945  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.866961  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:54.867018  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:54.885478  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.885491  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:54.885540  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:54.904643  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.904657  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:54.904701  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:54.923539  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.923554  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:54.923615  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:54.941322  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.941338  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:54.941399  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:54.961766  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.961785  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:54.961839  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:54.981417  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.981432  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:54.981442  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:54.981452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:55.012286  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:55.012306  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:55.043682  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:55.043703  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:55.091444  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:55.091466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:55.107015  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:55.107039  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:55.162701  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:55.155754   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.156265   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.157880   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.158349   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.159815   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:57.664308  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:57.675489  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:57.692858  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.692875  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:57.692935  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:57.712522  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.712538  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:57.712607  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:57.732210  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.732226  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:57.732269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:57.751532  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.751545  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:57.751602  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:57.772243  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.772257  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:57.772301  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:57.791227  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.791243  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:57.791304  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:57.810525  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.810543  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:57.810552  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:57.810561  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:57.858495  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:57.858513  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:57.873762  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:57.873777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:57.929650  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:57.922350   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.922945   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924490   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924968   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.926535   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:57.929662  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:57.929672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:57.960293  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:57.960310  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.491408  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:00.502843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:00.522074  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.522090  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:00.522138  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:00.540871  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.540888  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:00.540945  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:00.558913  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.558931  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:00.558975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:00.577980  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.577997  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:00.578050  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:00.597037  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.597056  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:00.597104  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:00.615867  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.615881  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:00.615924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:00.634566  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.634581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:00.634609  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:00.634623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.663403  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:00.663425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:00.712341  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:00.712364  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:00.729099  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:00.729121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:00.785283  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:00.778009   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.778541   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780082   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780546   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.782086   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:00.785294  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:00.785307  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:03.318166  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:03.329041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:03.347864  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.347878  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:03.347940  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:03.366921  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.366937  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:03.366991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:03.384054  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.384070  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:03.384117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:03.403365  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.403380  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:03.403432  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:03.422487  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.422501  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:03.422556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:03.440736  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.440751  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:03.440805  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:03.459891  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.459906  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:03.459915  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:03.459926  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:03.508624  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:03.508646  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:03.525926  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:03.525946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:03.581788  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:03.574395   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.574955   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.576542   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.577046   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.578534   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:03.581798  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:03.581809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:03.610547  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:03.610564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:06.140960  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:06.152054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:06.171277  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.171290  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:06.171346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:06.190005  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.190024  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:06.190073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:06.209033  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.209052  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:06.209119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:06.228351  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.228368  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:06.228424  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:06.248668  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.248682  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:06.248737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:06.269569  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.269587  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:06.269662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:06.291826  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.291841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:06.291851  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:06.291861  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:06.337818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:06.337839  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:06.353395  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:06.353413  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:06.410648  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:06.403406   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.403979   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.405629   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.406063   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.407632   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:06.410666  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:06.410681  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:06.440427  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:06.440447  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:08.971014  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:08.982172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:09.001296  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.001315  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:09.001377  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:09.021010  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.021025  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:09.021065  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:09.039299  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.039315  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:09.039361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:09.059055  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.059069  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:09.059119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:09.079065  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.079080  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:09.079123  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:09.098129  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.098148  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:09.098194  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:09.116949  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.116965  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:09.116974  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:09.116984  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:09.172761  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:09.165842   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.166366   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.167907   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.168380   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.169881   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:09.172771  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:09.172782  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:09.203094  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:09.203115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:09.231372  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:09.231389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:09.279710  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:09.279730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
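	Each retry cycle ends with the same four collection commands: kubelet and Docker/cri-docker unit logs via journalctl, kernel warnings and errors via dmesg, and container status via crictl with a docker fallback. For manual triage they can be bundled into one snapshot, a sketch that simply restates the commands as they appear in this log (to be run inside the node):
	    # Sketch: one-shot log snapshot mirroring minikube's gatherers.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a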
	I1222 23:01:11.797972  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:11.809159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:11.828406  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.828423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:11.828474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:11.847173  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.847194  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:11.847248  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:11.866123  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.866141  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:11.866191  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:11.885374  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.885388  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:11.885428  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:11.904372  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.904386  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:11.904429  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:11.923420  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.923437  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:11.923496  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:11.942294  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.942332  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:11.942344  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:11.942356  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:11.999449  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:11.992239   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.992816   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994388   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994771   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.996307   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:11.999465  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:11.999478  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:12.029498  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:12.029524  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:12.058708  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:12.058726  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:12.106818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:12.106837  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:14.624265  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:14.635338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:14.654466  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.654481  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:14.654523  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:14.674801  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.674817  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:14.674860  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:14.694258  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.694275  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:14.694322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:14.714916  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.714932  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:14.714980  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:14.734141  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.734155  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:14.734198  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:14.754093  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.754108  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:14.754162  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:14.773451  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.773468  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:14.773481  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:14.773496  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:14.830750  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:14.823428   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.824043   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.825689   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.826129   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.827703   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:14.830760  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:14.830770  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:14.859787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:14.859804  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:14.888611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:14.888631  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:14.936097  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:14.936118  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.453191  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:17.464732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:17.485010  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.485025  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:17.485072  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:17.505952  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.505969  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:17.506027  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:17.528761  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.528776  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:17.528821  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:17.549296  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.549312  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:17.549376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:17.568100  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.568117  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:17.568167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:17.587017  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.587034  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:17.587086  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:17.606020  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.606036  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:17.606045  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:17.606061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.621414  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:17.621430  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:17.677909  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:17.670771   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.671262   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.672895   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.673340   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.674869   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:17.677925  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:17.677935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:17.708117  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:17.708138  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:17.739554  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:17.739574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
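The cycle above is minikube's apiserver wait loop: each pass re-checks for a kube-apiserver process, re-gathers kubelet, dmesg, Docker, and container-status logs, and retries `kubectl describe nodes`, which keeps failing because nothing is listening on port 8441 yet. A minimal sketch of the same liveness probe, runnable inside the node (the port and the pgrep pattern are taken from the log; the retry bounds and the use of curl are assumptions):

    # Poll until the apiserver answers on 8441; connection refused until then.
    for i in $(seq 1 10); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process up"
      # /healthz returns "ok" once the apiserver is serving requests.
      curl -sk --max-time 2 https://localhost:8441/healthz && break
      sleep 2
    done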
	I1222 23:01:20.288948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:20.299810  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:20.318929  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.318945  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:20.319006  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:20.337556  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.337573  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:20.337641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:20.355705  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.355718  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:20.355760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:20.373672  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.373686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:20.373726  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:20.392616  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.392631  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:20.392674  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:20.411253  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.411270  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:20.411322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:20.429537  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.429552  158374 logs.go:284] No container was found matching "kindnet"
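Each component probe above is the same one-liner with a different name filter: cri-dockerd names pod containers `k8s_<container>_<pod>_...`, so filtering `docker ps -a` on the `k8s_` prefix and printing only IDs is enough to decide whether a control-plane piece ever ran. The seven per-component checks collapse to a loop like this (a sketch; the component list is copied from the log):

    # Report which expected control-plane containers exist, by name prefix.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
      [ -n "$ids" ] && echo "${c}: ${ids}" || echo "no container matching ${c}"
    done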
	I1222 23:01:20.429563  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:20.429575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:20.445080  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:20.445098  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:20.501506  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:20.494080   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.494697   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496376   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496836   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.498365   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:20.494080   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.494697   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496376   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496836   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.498365   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:20.501520  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:20.501535  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:20.531907  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:20.531925  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:20.560530  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:20.560547  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.108846  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:23.119974  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:23.138613  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.138631  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:23.138686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:23.156921  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.156935  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:23.156975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:23.175153  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.175166  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:23.175209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:23.193263  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.193295  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:23.193356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:23.212207  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.212221  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:23.212263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:23.230929  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.230945  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:23.231005  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:23.249646  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.249659  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:23.249669  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:23.249679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:23.277729  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:23.277745  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.324063  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:23.324082  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:23.339406  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:23.339425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:23.394875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:23.387312   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.387859   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.389847   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.390576   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.392042   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:23.387312   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.387859   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.389847   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.390576   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.392042   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:23.394885  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:23.394895  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:25.926075  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:25.937168  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:25.956016  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.956028  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:25.956074  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:25.974143  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.974159  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:25.974202  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:25.992434  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.992449  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:25.992505  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:26.010399  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.010415  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:26.010474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:26.029958  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.029974  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:26.030039  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:26.048962  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.048977  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:26.049032  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:26.067563  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.067581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:26.067605  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:26.067627  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:26.115800  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:26.115818  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:26.131143  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:26.131160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:26.187038  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:26.179580   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.180759   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.181222   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.182757   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.183141   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:26.179580   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.180759   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.181222   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.182757   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.183141   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:26.187049  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:26.187059  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:26.217787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:26.217807  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.746733  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:28.757902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:28.777575  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.777608  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:28.777664  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:28.798343  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.798363  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:28.798420  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:28.816843  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.816859  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:28.816904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:28.835441  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.835458  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:28.835507  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:28.855127  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.855139  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:28.855195  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:28.873368  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.873381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:28.873422  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:28.890543  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.890556  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:28.890565  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:28.890575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.918533  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:28.918553  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:28.963368  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:28.963389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:28.978933  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:28.978951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:29.035020  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:29.035033  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:29.035044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.565580  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:31.576645  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:31.596094  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.596110  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:31.596173  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:31.615329  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.615345  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:31.615394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:31.634047  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.634065  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:31.634117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:31.653553  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.653567  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:31.653626  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:31.671338  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.671354  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:31.671411  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:31.689466  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.689482  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:31.689536  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:31.707731  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.707744  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:31.707753  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:31.707761  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:31.759802  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:31.759828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:31.776809  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:31.776828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:31.833983  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:31.833996  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:31.834010  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.862984  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:31.863002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.392435  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:34.403543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:34.422799  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.422814  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:34.422857  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:34.442014  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.442029  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:34.442076  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:34.460995  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.461007  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:34.461056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:34.479671  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.479688  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:34.479737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:34.498097  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.498113  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:34.498169  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:34.515924  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.515939  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:34.515985  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:34.534433  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.534447  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:34.534455  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:34.534466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:34.589495  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:34.589505  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:34.589515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:34.619333  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:34.619352  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.647369  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:34.647385  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:34.692228  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:34.692249  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.207836  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:37.218829  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:37.237894  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.237908  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:37.237953  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:37.256000  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.256012  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:37.256054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:37.275097  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.275112  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:37.275161  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:37.294044  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.294059  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:37.294099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:37.312979  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.312995  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:37.313041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:37.331662  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.331675  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:37.331718  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:37.350197  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.350212  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:37.350223  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:37.350233  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:37.397328  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:37.397345  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.412835  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:37.412852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:37.468475  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:37.468487  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:37.468498  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:37.497915  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:37.497934  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.027516  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:40.039728  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:40.058705  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.058719  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:40.058762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:40.076167  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.076184  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:40.076231  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:40.095983  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.095996  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:40.096037  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:40.114657  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.114670  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:40.114717  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:40.133015  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.133028  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:40.133070  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:40.152127  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.152140  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:40.152187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:40.169569  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.169583  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:40.169622  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:40.169636  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:40.184978  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:40.184992  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:40.238860  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:40.238870  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:40.238879  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:40.268146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:40.268165  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.295931  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:40.295946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:42.843708  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:42.854816  158374 kubeadm.go:602] duration metric: took 4m1.499731906s to restartPrimaryControlPlane
	W1222 23:01:42.854901  158374 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 23:01:42.854978  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
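After roughly four minutes of failed restart attempts, minikube falls back to wiping the control plane with `kubeadm reset` before re-running init; the reset clears /etc/kubernetes and local etcd state on the node. A manual equivalent under the docker driver, with a placeholder profile name (the binary path and flags are taken from the log line above):

    # Destructive: clears /etc/kubernetes and local etcd data on the node.
    minikube ssh -p <profile> "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"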
	I1222 23:01:43.257733  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:01:43.270528  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:01:43.278400  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:01:43.278439  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:01:43.285901  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:01:43.285910  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:01:43.285947  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:01:43.293882  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:01:43.293919  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:01:43.300825  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:01:43.307941  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:01:43.307983  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:01:43.314784  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.321699  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:01:43.321728  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.328397  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:01:43.335272  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:01:43.335301  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
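The sequence above is the stale-kubeconfig sweep: each conf under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent. Here every grep exits with status 2 (file missing, since the reset already deleted them), so each rm is a no-op. The pattern, condensed into one loop (a sketch of the same logic, not minikube's code):

    # Keep a kubeconfig only if it already targets the expected endpoint.
    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done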
	I1222 23:01:43.341949  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:01:43.376034  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:01:43.376102  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:01:43.445165  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:01:43.445236  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:01:43.445264  158374 kubeadm.go:319] OS: Linux
	I1222 23:01:43.445301  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:01:43.445350  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:01:43.445392  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:01:43.445455  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:01:43.445494  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:01:43.445551  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:01:43.445588  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:01:43.445673  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:01:43.445736  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
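The verification list confirms every required cgroup controller is enabled, while the [WARNING SystemVerification] further down shows this node is still on cgroups v1. A quick way to check which hierarchy a node runs (a standard probe, not taken from this log):

    # cgroup2fs => unified v2 hierarchy; tmpfs => legacy v1 (as on this node).
    stat -fc %T /sys/fs/cgroup
    # Per the warning below, keeping kubelet >= v1.35 on cgroups v1 would
    # require failCgroupV1: false in the KubeletConfiguration (field name
    # assumed lowerCamelCase from the warning's 'FailCgroupV1').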
	I1222 23:01:43.500219  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:01:43.500396  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:01:43.500507  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:01:43.513368  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:01:43.515533  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:01:43.515634  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:01:43.515681  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:01:43.515765  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:01:43.515820  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:01:43.515882  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:01:43.515924  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:01:43.515975  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:01:43.516024  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:01:43.516083  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:01:43.516151  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:01:43.516181  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:01:43.516264  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:01:43.648060  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:01:43.775539  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:01:43.806099  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:01:43.912171  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:01:44.004004  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:01:44.004296  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:01:44.006366  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:01:44.008239  158374 out.go:252]   - Booting up control plane ...
	I1222 23:01:44.008308  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:01:44.008365  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:01:44.009041  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:01:44.026974  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:01:44.027088  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:01:44.033700  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:01:44.033947  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:01:44.034017  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:01:44.136722  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:01:44.136861  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:05:44.137086  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000408574s
	I1222 23:05:44.137129  158374 kubeadm.go:319] 
	I1222 23:05:44.137190  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:05:44.137216  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:05:44.137303  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:05:44.137309  158374 kubeadm.go:319] 
	I1222 23:05:44.137438  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:05:44.137492  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:05:44.137528  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:05:44.137531  158374 kubeadm.go:319] 
	I1222 23:05:44.140373  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.140752  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.140849  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:05:44.141147  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:05:44.141156  158374 kubeadm.go:319] 
	I1222 23:05:44.141230  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 23:05:44.141360  158374 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000408574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
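
The three commands kubeadm suggests above are the quickest way to confirm this failure mode on the node itself. A minimal triage sketch, assuming the profile name from this run and 'minikube ssh' as the entry point; the commands and the healthz endpoint are taken verbatim from the log:

	minikube ssh -p functional-384766
	# inside the node: the checks kubeadm recommends
	sudo systemctl status kubelet               # unit state and restart counter
	sudo journalctl -xeu kubelet | tail -n 50   # the actual exit reason
	curl -sSL http://127.0.0.1:10248/healthz    # the endpoint the wait-control-plane phase polls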
	
	I1222 23:05:44.141451  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:05:44.555201  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:05:44.567871  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:05:44.567915  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:05:44.575883  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:05:44.575897  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:05:44.575941  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:05:44.583486  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:05:44.583527  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:05:44.590649  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:05:44.597769  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:05:44.597806  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:05:44.604798  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.611986  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:05:44.612034  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.619193  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:05:44.626515  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:05:44.626555  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:05:44.633629  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:05:44.735033  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.735554  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.792296  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:09:45.398895  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:09:45.398970  158374 kubeadm.go:319] 
	I1222 23:09:45.399090  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:09:45.401586  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:09:45.401671  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:09:45.401745  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:09:45.401791  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:09:45.401820  158374 kubeadm.go:319] OS: Linux
	I1222 23:09:45.401885  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:09:45.401955  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:09:45.402023  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:09:45.402088  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:09:45.402152  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:09:45.402201  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:09:45.402235  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:09:45.402274  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:09:45.402309  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:09:45.402367  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:09:45.402449  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:09:45.402536  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:09:45.402585  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:09:45.404239  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:09:45.404310  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:09:45.404360  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:09:45.404421  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:09:45.404472  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:09:45.404530  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:09:45.404569  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:09:45.404650  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:09:45.404705  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:09:45.404761  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:09:45.404827  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:09:45.404867  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:09:45.404910  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:09:45.404948  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:09:45.404989  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:09:45.405029  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:09:45.405075  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:09:45.405115  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:09:45.405181  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:09:45.405240  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:09:45.406503  158374 out.go:252]   - Booting up control plane ...
	I1222 23:09:45.406585  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:09:45.406677  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:09:45.406738  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:09:45.406832  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:09:45.406905  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:09:45.406993  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:09:45.407062  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:09:45.407092  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:09:45.407211  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:09:45.407300  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:09:45.407348  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000888774s
	I1222 23:09:45.407350  158374 kubeadm.go:319] 
	I1222 23:09:45.407409  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:09:45.407435  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:09:45.407521  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:09:45.407524  158374 kubeadm.go:319] 
	I1222 23:09:45.407628  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:09:45.407652  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:09:45.407675  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:09:45.407697  158374 kubeadm.go:319] 
	I1222 23:09:45.407753  158374 kubeadm.go:403] duration metric: took 12m4.079935698s to StartCluster
	I1222 23:09:45.407873  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:09:45.407938  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:09:45.445003  158374 cri.go:96] found id: ""
	I1222 23:09:45.445021  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.445027  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:09:45.445038  158374 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:09:45.445084  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:09:45.470772  158374 cri.go:96] found id: ""
	I1222 23:09:45.470788  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.470794  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:09:45.470799  158374 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:09:45.470845  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:09:45.495903  158374 cri.go:96] found id: ""
	I1222 23:09:45.495920  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.495927  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:09:45.495933  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:09:45.495983  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:09:45.523926  158374 cri.go:96] found id: ""
	I1222 23:09:45.523943  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.523952  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:09:45.523960  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:09:45.524021  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:09:45.551137  158374 cri.go:96] found id: ""
	I1222 23:09:45.551153  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.551164  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:09:45.551171  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:09:45.551226  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:09:45.576565  158374 cri.go:96] found id: ""
	I1222 23:09:45.576583  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.576611  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:09:45.576621  158374 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:09:45.576676  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:09:45.601969  158374 cri.go:96] found id: ""
	I1222 23:09:45.601983  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.601991  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:09:45.602003  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:09:45.602018  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:09:45.650169  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:09:45.650193  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:09:45.665853  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:09:45.665870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:09:45.722796  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:09:45.722812  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:09:45.722824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:09:45.751752  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:09:45.751775  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 23:09:45.780185  158374 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:09:45.780232  158374 out.go:285] * 
	W1222 23:09:45.780327  158374 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.780349  158374 out.go:285] * 
	W1222 23:09:45.780644  158374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:09:45.784145  158374 out.go:203] 
	W1222 23:09:45.785152  158374 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.785198  158374 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:09:45.785226  158374 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:09:45.786470  158374 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.415844375Z" level=info msg="Loading containers: done."
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
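
One detail in this journal matters for the kubelet failure below: cri-dockerd registers "cgroupDriver cgroupfs", and the kubeadm warnings earlier flag this as a cgroup v1 host. A quick cross-check of what the engine itself reports, sketched with standard 'docker info' template fields (run inside the node):

	docker info --format 'driver={{.CgroupDriver}} cgroups=v{{.CgroupVersion}}'
	# given the journal above, this should print: driver=cgroupfs cgroups=v1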
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:46.819914   37429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:46.820473   37429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:46.822027   37429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:46.822472   37429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:46.824004   37429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:09:46 up  2:52,  0 user,  load average: 0.12, 0.11, 0.33
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:09:43 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:43 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 23:09:43 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:43 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:44 functional-384766 kubelet[37141]: E1222 23:09:44.035706   37141 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:44 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:44 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:44 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 23:09:44 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:44 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:44 functional-384766 kubelet[37151]: E1222 23:09:44.785101   37151 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:44 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:44 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:45 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 23:09:45 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:45 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:45 functional-384766 kubelet[37204]: E1222 23:09:45.541449   37204 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:45 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:45 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:46 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 23:09:46 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:46 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:46 functional-384766 kubelet[37315]: E1222 23:09:46.306432   37315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:46 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:46 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (302.206881ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (731.32s)
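
The kubelet journal above pins down the root cause: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host unless the KubeletConfiguration option named in the kubeadm warning, 'FailCgroupV1', is set to 'false'. The same log shows kubeadm already applying a strategic-merge patch to the "kubeletconfiguration" target, so that mechanism could plausibly carry the override. A hypothetical sketch of the workaround; the YAML field spelling failCgroupV1 is an assumption (the warning only gives 'FailCgroupV1'), and the file name follows kubeadm's target+patchtype convention:

	# assumption: failCgroupV1 is the serialized spelling of 'FailCgroupV1'
	mkdir -p /tmp/kubeadm-patches
	cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	# then: kubeadm init ... --patches /tmp/kubeadm-patches
	# SystemVerification is already in --ignore-preflight-errors above, which
	# covers the warning's "explicitly skip this validation" requirement.

Note that minikube's own suggestion in the log, --extra-config=kubelet.cgroup-driver=systemd, targets a cgroup-driver mismatch rather than this cgroup v1 validation, so on this host it would likely not clear the error on its own.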

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-384766 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-384766 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (50.593404ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-384766 get po -l tier=control-plane -n kube-system -o=json": exit status 1
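
Both serial failures in this group reduce to the same dead apiserver, so before reading anything into the empty pod list it helps to separate "apiserver down" from "wrong endpoint". A small sketch using commands that already appear in this report plus 'kubectl cluster-info':

	out/minikube-linux-amd64 status -p functional-384766 --format={{.APIServer}}
	# prints "Stopped" here (see the post-mortem below)
	kubectl --context functional-384766 cluster-info
	# fails with the same "connection refused" when the endpoint is correct
	# but nothing is listening on 192.168.49.2:8441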
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
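Rather than reading the full JSON dump, a single field can be extracted with an inspect template; the sketch below pulls the host port mapped to the apiserver's 8441/tcp, using the same Go-template form the minikube logs further down use for the SSH port:

	# prints the HostPort bound to 8441/tcp (32786 in the dump above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384766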
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (304.075114ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-580825 image ls --format json --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format short --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ image   │ functional-580825 image ls --format yaml --alsologtostderr                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls --format table --alsologtostderr                                                                                       │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr                                            │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image   │ functional-580825 image ls                                                                                                                        │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ delete  │ -p functional-580825                                                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ start   │ -p functional-384766 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start   │ -p functional-384766 --alsologtostderr -v=8                                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:51 UTC │                     │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add registry.k8s.io/pause:latest                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache add minikube-local-cache-test:functional-384766                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ functional-384766 cache delete minikube-local-cache-test:functional-384766                                                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl images                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo docker rmi registry.k8s.io/pause:latest                                                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	│ cache   │ functional-384766 cache reload                                                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ ssh     │ functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │ 22 Dec 25 22:57 UTC │
	│ kubectl │ functional-384766 kubectl -- --context functional-384766 get pods                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	│ start   │ -p functional-384766 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
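	The Audit table above is minikube's local command-history log, which the logs command renders first. Assuming this build's logs command supports the --audit flag (an assumption; the flag exists in recent minikube releases), the table can be printed on its own with:
	
		out/minikube-linux-amd64 logs --audit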
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:57:36
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:57:36.254392  158374 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:57:36.254700  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254705  158374 out.go:374] Setting ErrFile to fd 2...
	I1222 22:57:36.254708  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254883  158374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:57:36.255420  158374 out.go:368] Setting JSON to false
	I1222 22:57:36.256374  158374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9596,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:57:36.256458  158374 start.go:143] virtualization: kvm guest
	I1222 22:57:36.258393  158374 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:57:36.259562  158374 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:57:36.259584  158374 notify.go:221] Checking for updates...
	I1222 22:57:36.261710  158374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:57:36.262944  158374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:57:36.264212  158374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:57:36.265355  158374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:57:36.266271  158374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:57:36.267661  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:36.267820  158374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:57:36.296187  158374 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:57:36.296285  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.350829  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.341376778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.350930  158374 docker.go:319] overlay module found
	I1222 22:57:36.352570  158374 out.go:179] * Using the docker driver based on existing profile
	I1222 22:57:36.353588  158374 start.go:309] selected driver: docker
	I1222 22:57:36.353611  158374 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.353719  158374 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:57:36.353830  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.406492  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.397760538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.407140  158374 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 22:57:36.407175  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:36.407232  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:36.407286  158374 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.408996  158374 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:57:36.410078  158374 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:57:36.411119  158374 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:57:36.412129  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:36.412159  158374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:57:36.412174  158374 cache.go:65] Caching tarball of preloaded images
	I1222 22:57:36.412242  158374 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:57:36.412248  158374 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:57:36.412244  158374 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:57:36.412341  158374 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:57:36.431941  158374 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:57:36.431955  158374 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:57:36.431969  158374 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:57:36.431996  158374 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:57:36.432059  158374 start.go:364] duration metric: took 40.356µs to acquireMachinesLock for "functional-384766"
	I1222 22:57:36.432072  158374 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:57:36.432076  158374 fix.go:54] fixHost starting: 
	I1222 22:57:36.432265  158374 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:57:36.449079  158374 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:57:36.449100  158374 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:57:36.450671  158374 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:57:36.450705  158374 machine.go:94] provisionDockerMachine start ...
	I1222 22:57:36.450764  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.467607  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.467835  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.467841  158374 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:57:36.608433  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.608449  158374 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:57:36.608504  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.626300  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.626509  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.626516  158374 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:57:36.777413  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.777486  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.795160  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.795380  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.795396  158374 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:57:36.935922  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:57:36.935942  158374 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:57:36.935957  158374 ubuntu.go:190] setting up certificates
	I1222 22:57:36.935965  158374 provision.go:84] configureAuth start
	I1222 22:57:36.936023  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:36.954219  158374 provision.go:143] copyHostCerts
	I1222 22:57:36.954277  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:57:36.954291  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:57:36.954367  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:57:36.954466  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:57:36.954469  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:57:36.954495  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:57:36.954569  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:57:36.954572  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:57:36.954631  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:57:36.954687  158374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:57:36.981147  158374 provision.go:177] copyRemoteCerts
	I1222 22:57:36.981202  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:57:36.981239  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.000716  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.101499  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:57:37.118740  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:57:37.135018  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 22:57:37.151214  158374 provision.go:87] duration metric: took 215.234679ms to configureAuth
	I1222 22:57:37.151234  158374 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:57:37.151390  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:37.151430  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.168491  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.168730  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.168737  158374 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:57:37.310361  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:57:37.310376  158374 ubuntu.go:71] root file system type: overlay
	I1222 22:57:37.310489  158374 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:57:37.310547  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.329095  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.329306  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.329369  158374 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:57:37.478917  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:57:37.478994  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.496454  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.496687  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.496699  158374 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:57:37.641628  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:57:37.641654  158374 machine.go:97] duration metric: took 1.190941144s to provisionDockerMachine
	I1222 22:57:37.641665  158374 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:57:37.641676  158374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:57:37.641727  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:57:37.641757  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.659069  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.759899  158374 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:57:37.763912  158374 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:57:37.763929  158374 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:57:37.763939  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:57:37.763985  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:57:37.764057  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:57:37.764125  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:57:37.764158  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:57:37.772288  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:37.789657  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:57:37.805946  158374 start.go:296] duration metric: took 164.267669ms for postStartSetup
	I1222 22:57:37.806019  158374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:57:37.806054  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.823397  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.920964  158374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:57:37.925567  158374 fix.go:56] duration metric: took 1.493483875s for fixHost
	I1222 22:57:37.925585  158374 start.go:83] releasing machines lock for "functional-384766", held for 1.493518865s
	I1222 22:57:37.925676  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:37.944340  158374 ssh_runner.go:195] Run: cat /version.json
	I1222 22:57:37.944379  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.944410  158374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:57:37.944475  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.962270  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.963480  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:38.111745  158374 ssh_runner.go:195] Run: systemctl --version
	I1222 22:57:38.118245  158374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 22:57:38.122628  158374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:57:38.122679  158374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:57:38.130349  158374 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:57:38.130362  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.130390  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.130482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.143844  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:57:38.152204  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:57:38.160833  158374 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.160878  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:57:38.168944  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.176827  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:57:38.185035  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.193068  158374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:57:38.200733  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:57:38.208877  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:57:38.217062  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:57:38.225212  158374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:57:38.231954  158374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:57:38.238562  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.319900  158374 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 22:57:38.394735  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.394777  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.394829  158374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:57:38.408181  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.420724  158374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:57:38.437862  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.450387  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:57:38.462197  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.475419  158374 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:57:38.478805  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:57:38.485878  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:57:38.497638  158374 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:57:38.579501  158374 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:57:38.662636  158374 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.662750  158374 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:57:38.675412  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:57:38.686668  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.767093  158374 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:57:39.452892  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:57:39.465276  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:57:39.477001  158374 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:57:39.491722  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.503501  158374 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:57:39.584904  158374 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:57:39.672762  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.748726  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:57:39.768653  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:57:39.780104  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.862790  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:57:39.934384  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.948030  158374 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:57:39.948084  158374 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:57:39.952002  158374 start.go:564] Will wait 60s for crictl version
	I1222 22:57:39.952049  158374 ssh_runner.go:195] Run: which crictl
	I1222 22:57:39.955397  158374 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:57:39.979213  158374 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:57:39.979270  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.004367  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.031792  158374 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:57:40.031863  158374 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
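
The --format argument above is a Go template that flattens the network's IPAM blocks into a single JSON object. If the full blob isn't needed, the same data can be pulled with a simpler template (a sketch; network name taken from this run):

    docker network inspect functional-384766 \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
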
	I1222 22:57:40.047933  158374 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:57:40.053698  158374 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 22:57:40.054726  158374 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:57:40.054846  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:40.054890  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.076020  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.076046  158374 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:57:40.076111  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.096347  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.096366  158374 cache_images.go:86] Images are preloaded, skipping loading
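
The preload check is a set comparison: list what the daemon already holds with the same docker images template and confirm the expected control-plane images are present. A rough equivalent (tags per the stdout block above):

    docker images --format '{{.Repository}}:{{.Tag}}' | grep -E \
      'kube-(apiserver|controller-manager|scheduler|proxy):v1.35.0-rc.1|etcd:3.6.6-0|coredns:v1.13.1'
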
	I1222 22:57:40.096374  158374 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:57:40.096468  158374 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
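
In the kubelet unit above, the empty ExecStart= line is deliberate: in a systemd override, an empty assignment clears the ExecStart inherited from the base unit so the following line fully replaces it rather than appending a second command. A sketch of installing the same override by hand (content verbatim from the log; the drop-in path matches the 10-kubeadm.conf scp a few lines below):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    EOF
    sudo systemctl daemon-reload
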
	I1222 22:57:40.096517  158374 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:57:40.147179  158374 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 22:57:40.147206  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:40.147226  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:40.147236  158374 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:57:40.147256  158374 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:57:40.147375  158374 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
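
A config of this shape can be checked before any phase runs; recent kubeadm releases ship a validate subcommand (a sketch, assuming the v1.35.0-rc.1 binary at the logged path supports it):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
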
	I1222 22:57:40.147436  158374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:57:40.155394  158374 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:57:40.155439  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:57:40.163036  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:57:40.175169  158374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:57:40.187093  158374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1222 22:57:40.198818  158374 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:57:40.202222  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:40.283126  158374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:57:40.747886  158374 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:57:40.747899  158374 certs.go:195] generating shared ca certs ...
	I1222 22:57:40.747914  158374 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:57:40.748072  158374 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:57:40.748113  158374 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:57:40.748119  158374 certs.go:257] generating profile certs ...
	I1222 22:57:40.748199  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:57:40.748236  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:57:40.748278  158374 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:57:40.748397  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:57:40.748423  158374 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:57:40.748429  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:57:40.748451  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:57:40.748470  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:57:40.748489  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:57:40.748525  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:40.749053  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:57:40.768237  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:57:40.787559  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:57:40.804276  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:57:40.820613  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:57:40.836790  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:57:40.852839  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:57:40.869050  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:57:40.885231  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:57:40.901347  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:57:40.917338  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:57:40.933332  158374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:57:40.944903  158374 ssh_runner.go:195] Run: openssl version
	I1222 22:57:40.950515  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.957071  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:57:40.963749  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.966999  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.967032  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:57:41.000342  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 22:57:41.007579  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.014450  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:57:41.021442  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024853  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024902  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.058138  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:57:41.065135  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.071858  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:57:41.078672  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082051  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082083  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.115012  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
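
Each ln/test -L pair above maintains an OpenSSL-style hash link: TLS libraries look CAs up in /etc/ssl/certs by the subject-hash filename <hash>.0, and that hash is exactly what the openssl x509 -hash invocations compute. The mechanism in miniature (for this run, minikubeCA hashes to b5213941 per the log):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    sudo test -L "/etc/ssl/certs/${hash}.0" && echo "hash link in place"
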
	I1222 22:57:41.122326  158374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:57:41.125872  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:57:41.158840  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:57:41.191689  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:57:41.224669  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:57:41.258802  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:57:41.292531  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
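
-checkend 86400 asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid, nonzero means it would need regeneration. The six checks above as a loop (cert names from the Run: lines):

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
               etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
        && echo "${crt}: valid for at least 24h" \
        || echo "${crt}: expiring, would regenerate"
    done
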
	I1222 22:57:41.327828  158374 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:41.327941  158374 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.347229  158374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:57:41.355058  158374 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:57:41.355067  158374 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:57:41.355102  158374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:57:41.362198  158374 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.362672  158374 kubeconfig.go:125] found "functional-384766" server: "https://192.168.49.2:8441"
	I1222 22:57:41.363809  158374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:57:41.371022  158374 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 22:43:13.034628184 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 22:57:40.197478933 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
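
The drift gate rests on diff's exit status: diff -u exits nonzero when the freshly rendered kubeadm.yaml.new differs from the copy on disk, and here the only change is the admission-plugins value. The same gate as shell (the cp in the failure branch mirrors the one run a few lines below):

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "no config drift; keep the running control plane"
    else
      echo "drift detected; reconfiguring"    # this run takes this branch
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
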
	I1222 22:57:41.371029  158374 kubeadm.go:1161] stopping kube-system containers ...
	I1222 22:57:41.371066  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.389715  158374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 22:57:41.415695  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 22:57:41.423304  158374 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 22 22:47 /etc/kubernetes/scheduler.conf
	
	I1222 22:57:41.423364  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 22:57:41.430717  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 22:57:41.437811  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.437848  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 22:57:41.444879  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.452191  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.452233  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.459225  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 22:57:41.466383  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.466418  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 22:57:41.473427  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 22:57:41.480724  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.518575  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.974225  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.135961  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.183844  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
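
Because existing configuration was found, the restart path re-runs individual kubeadm init phases instead of a full init: certificates, kubeconfigs, kubelet bootstrap, the three control-plane static pods, then local etcd. The same five invocations, verbatim, as a standalone script:

    BIN=/var/lib/minikube/binaries/v1.35.0-rc.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CFG"
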
	I1222 22:57:42.223254  158374 api_server.go:52] waiting for apiserver process to appear ...
	I1222 22:57:42.223318  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:42.723474  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:43.223549  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical probe `sudo pgrep -xnf kube-apiserver.*minikube.*` repeats on a ~500ms cadence from 22:57:43 through 22:58:41 without ever finding an apiserver process; 116 lines elided ...]
	I1222 22:58:41.723695  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
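
The probes above are a bounded wait: the same pgrep every ~500ms until the kube-apiserver process shows up or the wait gives up. An equivalent loop (pattern and interval from the log; the 60s bound is an assumption for illustration, though this run does poll for roughly a minute before falling back to log gathering):

    deadline=$((SECONDS + 60))    # hypothetical bound
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo 'apiserver never appeared' >&2; break; }
      sleep 0.5
    done
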
	I1222 22:58:42.223527  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:42.242498  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.242530  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:42.242576  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:42.263682  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.263696  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:42.263747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:42.284235  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.284250  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:42.284330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:42.303204  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.303219  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:42.303263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:42.321387  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.321404  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:42.321461  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:42.340277  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.340290  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:42.340333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:42.359009  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.359025  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:42.359034  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:42.359044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:42.407304  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:42.407323  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:42.423167  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:42.423184  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:42.478018  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:42.478032  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:42.478050  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:42.508140  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:42.508159  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:45.047948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:45.058851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:45.078438  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.078457  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:45.078506  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:45.096664  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.096678  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:45.096729  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:45.114982  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.114995  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:45.115033  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:45.132907  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.132920  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:45.132960  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:45.151352  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.151368  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:45.151409  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:45.169708  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.169725  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:45.169767  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:45.187775  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.187790  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:45.187802  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:45.187814  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:45.242776  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:45.242790  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:45.242800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:45.273873  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:45.273892  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:45.303522  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:45.303541  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:45.351682  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:45.351702  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
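
Each failed iteration gathers the same diagnostics bundle. Collected into one script (commands verbatim from the Run: lines):

    sudo journalctl -u kubelet -n 400                                  # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                        # fails while :8441 is down
    sudo journalctl -u docker -u cri-docker -n 400                     # Docker / cri-dockerd logs
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a   # container status
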
	I1222 22:58:47.869586  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:47.880760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:47.899543  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.899560  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:47.899617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:47.917954  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.917970  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:47.918017  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:47.936207  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.936224  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:47.936269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:47.954310  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.954328  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:47.954376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:47.971746  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.971762  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:47.971806  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:47.989993  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.990008  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:47.990054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:48.008188  158374 logs.go:282] 0 containers: []
	W1222 22:58:48.008204  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:48.008215  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:48.008227  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:48.056174  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:48.056192  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:48.071128  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:48.071143  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:48.124584  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:48.124621  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:48.124635  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:48.155889  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:48.155907  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
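	The "connection refused" failures repeated throughout this log all come from kubectl probing https://localhost:8441 while no kube-apiserver container exists. A minimal manual check along the same lines, assuming shell access to the node under test (port 8441 is taken from the errors above; /livez is the standard apiserver health endpoint), would be:
	
	    # Probe the apiserver port directly; "connection refused" means nothing is
	    # listening on 8441, which matches every kubectl error in this log.
	    curl -sk --max-time 5 https://localhost:8441/livez \
	      || echo "kube-apiserver is not serving on localhost:8441"
	
	If this probe fails, the kubelet and Docker logs gathered above are the next place to look for why the apiserver container never started.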
	I1222 22:58:50.685742  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:50.696961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:50.716371  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.716385  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:50.716430  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:50.734780  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.734798  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:50.734842  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:50.753152  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.753169  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:50.753213  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:50.771281  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.771296  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:50.771338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:50.788814  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.788826  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:50.788872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:50.806768  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.806781  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:50.806837  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:50.824539  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.824552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:50.824561  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:50.824581  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:50.873346  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:50.873363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:50.888174  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:50.888188  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:50.942890  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:50.942904  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:50.942915  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:50.971205  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:50.971223  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:53.500770  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:53.512538  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:53.535794  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.535812  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:53.535872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:53.554667  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.554684  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:53.554739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:53.573251  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.573267  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:53.573317  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:53.591664  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.591686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:53.591739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:53.610128  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.610141  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:53.610183  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:53.628089  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.628105  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:53.628148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:53.645890  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.645908  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:53.645919  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:53.645932  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:53.692043  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:53.692062  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:53.707092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:53.707107  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:53.761308  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:53.761320  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:53.761331  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:53.789713  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:53.789730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
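	Each retry cycle above runs the same diagnostic sequence: look for a kube-apiserver process, list the expected control-plane containers by name filter, then gather kubelet, dmesg, describe-nodes, Docker, and container-status output. A rough shell equivalent of one cycle, reconstructed from the Run lines in this log (the k8s_ name prefixes and the crictl-with-docker fallback are copied from those commands; this is a sketch, not minikube's actual code):
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # is an apiserver process running?
	    for c in kube-apiserver etcd coredns kube-scheduler \
	             kube-proxy kube-controller-manager kindnet; do
	      # expected control-plane containers, matched by the k8s_<name> prefix
	      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
	    done
	    sudo journalctl -u kubelet -n 400               # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u docker -u cri-docker -n 400  # Docker / cri-docker logs
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	
	In this run every container filter returns zero results, which is why each cycle ends back at the same "connection refused" describe-nodes failure.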
	I1222 22:58:56.318819  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:56.329790  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:56.348795  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.348808  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:56.348851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:56.366850  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.366866  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:56.366932  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:56.385468  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.385483  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:56.385530  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:56.404330  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.404345  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:56.404406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:56.422533  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.422549  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:56.422631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:56.440667  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.440681  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:56.440742  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:56.459073  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.459088  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:56.459099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:56.459113  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:56.506766  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:56.506783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:56.523645  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:56.523667  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:56.580517  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:56.580531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:56.580543  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:56.610571  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:56.610588  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.140001  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:59.151059  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:59.169787  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.169801  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:59.169840  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:59.187907  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.187919  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:59.187959  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:59.206755  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.206770  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:59.206811  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:59.225123  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.225139  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:59.225179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:59.243400  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.243414  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:59.243453  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:59.261475  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.261492  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:59.261556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:59.279819  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.279834  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:59.279844  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:59.279855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:59.295024  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:59.295046  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:59.349874  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:59.343012   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.343571   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345099   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345548   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.347033   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:59.343012   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.343571   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345099   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345548   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.347033   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:59.349889  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:59.349902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:59.381356  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:59.381378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.409144  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:59.409160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:01.955340  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:01.966166  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:01.985075  158374 logs.go:282] 0 containers: []
	W1222 22:59:01.985088  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:01.985135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:02.003681  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.003695  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:02.003748  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:02.022064  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.022081  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:02.022127  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:02.040290  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.040302  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:02.040346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:02.058109  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.058123  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:02.058167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:02.076398  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.076415  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:02.076469  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:02.095264  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.095326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:02.095338  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:02.095350  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:02.140655  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:02.140678  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:02.156234  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:02.156248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:02.212079  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:02.205182   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.205716   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207280   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207782   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.209314   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:02.205182   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.205716   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207280   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207782   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.209314   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:02.212094  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:02.212106  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:02.241399  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:02.241415  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:04.771709  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:04.783605  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:04.802797  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.802811  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:04.802907  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:04.822172  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.822187  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:04.822232  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:04.840265  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.840280  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:04.840320  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:04.858270  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.858287  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:04.858329  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:04.876142  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.876158  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:04.876204  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:04.894156  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.894169  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:04.894209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:04.912355  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.912373  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:04.912383  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:04.912393  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:04.940312  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:04.940332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:04.985353  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:04.985370  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:05.000242  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:05.000264  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:05.054276  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:05.047325   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.047883   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049424   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049887   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.051401   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:05.047325   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.047883   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049424   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049887   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.051401   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:05.054288  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:05.054298  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:07.583327  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:07.594487  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:07.614008  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.614023  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:07.614073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:07.633345  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.633364  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:07.633410  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:07.651888  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.651900  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:07.651939  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:07.670373  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.670389  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:07.670431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:07.687752  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.687772  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:07.687819  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:07.707382  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.707397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:07.707449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:07.725692  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.725705  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:07.725714  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:07.725724  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:07.741276  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:07.741290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:07.807688  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:07.800646   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.801223   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.802769   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.803177   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.804708   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:07.800646   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.801223   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.802769   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.803177   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.804708   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:07.807698  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:07.807708  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:07.838193  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:07.838211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:07.867411  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:07.867429  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.417278  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:10.428172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:10.447192  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.447210  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:10.447268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:10.465742  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.465755  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:10.465802  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:10.483930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.483943  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:10.483982  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:10.502550  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.502564  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:10.502631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:10.521157  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.521170  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:10.521217  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:10.539930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.539944  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:10.539988  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:10.557819  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.557836  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:10.557847  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:10.557860  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:10.586007  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:10.586023  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.630906  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:10.630928  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:10.645969  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:10.645986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:10.700369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:10.693704   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.694182   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.695738   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.696170   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.697667   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:10.693704   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.694182   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.695738   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.696170   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.697667   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:10.700383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:10.700396  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.229999  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:13.241245  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:13.261612  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.261629  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:13.261685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:13.279825  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.279843  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:13.279893  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:13.297933  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.297951  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:13.298008  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:13.316218  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.316235  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:13.316315  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:13.334375  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.334389  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:13.334444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:13.353104  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.353123  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:13.353179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:13.371772  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.371791  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:13.371802  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:13.371816  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:13.419777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:13.419800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:13.435473  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:13.435489  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:13.490824  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:13.484010   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.484560   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486103   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486541   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.488011   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:13.484010   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.484560   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486103   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486541   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.488011   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:13.490835  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:13.490848  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.519782  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:13.519800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.052715  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:16.064085  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:16.083176  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.083195  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:16.083255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:16.102468  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.102485  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:16.102532  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:16.121564  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.121580  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:16.121654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:16.140862  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.140879  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:16.140928  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:16.159281  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.159295  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:16.159347  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:16.177569  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.177606  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:16.177659  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:16.196491  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.196507  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:16.196516  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:16.196526  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.225379  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:16.225399  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:16.270312  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:16.270332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:16.285737  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:16.285752  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:16.339892  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:16.332955   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.333507   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335037   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335481   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.337003   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:16.332955   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.333507   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335037   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335481   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.337003   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:16.339906  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:16.339924  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
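(The cycle above repeats roughly every 2.5 seconds for the rest of the wait: minikube polls for a running kube-apiserver process, lists each expected control-plane container by Docker name filter, finds none, and then gathers kubelet, dmesg, describe-nodes, and Docker logs before retrying. The same probes can be reproduced by hand against this profile; this is a sketch, not part of the captured log: the commands are taken from the Run: lines above and wrapped in minikube ssh, and the profile name functional-384766 comes from this report.)

    minikube -p functional-384766 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube -p functional-384766 ssh -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
    minikube -p functional-384766 ssh -- sudo journalctl -u kubelet -n 400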
	I1222 22:59:18.870402  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:18.881333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:18.899917  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.899940  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:18.899987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:18.918652  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.918666  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:18.918711  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:18.936854  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.936871  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:18.936930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:18.956082  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.956099  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:18.956148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:18.974672  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.974690  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:18.974747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:18.993264  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.993281  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:18.993330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:19.013308  158374 logs.go:282] 0 containers: []
	W1222 22:59:19.013325  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:19.013335  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:19.013346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:19.063311  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:19.063330  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:19.078990  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:19.079012  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:19.135746  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:19.127970   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.128563   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130146   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130556   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.132525   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:19.127970   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.128563   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130146   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130556   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.132525   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:19.135757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:19.135778  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:19.165331  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:19.165348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:21.694471  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:21.705412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:21.724588  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.724617  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:21.724663  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:21.744659  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.744677  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:21.744732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:21.762841  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.762858  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:21.762913  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:21.782008  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.782023  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:21.782064  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:21.801013  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.801031  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:21.801077  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:21.817861  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.817879  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:21.817936  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:21.836076  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.836093  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:21.836104  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:21.836115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:21.884827  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:21.884849  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:21.900053  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:21.900069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:21.955238  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:21.947967   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.948481   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950007   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950444   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.952014   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:21.947967   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.948481   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950007   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950444   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.952014   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:21.955248  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:21.955258  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:21.984138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:21.984157  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.515104  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:24.526883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:24.546166  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.546180  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:24.546228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:24.565305  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.565319  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:24.565361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:24.584559  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.584572  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:24.584631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:24.604650  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.604664  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:24.604712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:24.623346  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.623362  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:24.623412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:24.642324  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.642343  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:24.642406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:24.661990  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.662004  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:24.662013  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:24.662024  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:24.677840  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:24.677855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:24.734271  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:24.726969   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.727498   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729051   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729473   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.731041   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:24.726969   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.727498   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729051   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729473   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.731041   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:24.734289  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:24.734304  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:24.764562  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:24.764580  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.793099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:24.793115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.340497  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:27.351904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:27.372400  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.372419  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:27.372472  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:27.392295  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.392312  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:27.392363  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:27.411771  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.411784  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:27.411828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:27.430497  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.430512  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:27.430558  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:27.449983  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.449999  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:27.450044  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:27.469696  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.469714  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:27.469771  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:27.488685  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.488702  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:27.488715  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:27.488730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:27.517546  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:27.517564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.564530  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:27.564554  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:27.579944  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:27.579963  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:27.636369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:27.629189   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.629734   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631292   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631750   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.633229   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:27.629189   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.629734   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631292   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631750   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.633229   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:27.636383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:27.636394  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.168117  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:30.179633  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:30.199078  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.199094  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:30.199144  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:30.218504  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.218517  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:30.218559  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:30.237792  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.237810  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:30.237858  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:30.257058  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.257073  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:30.257118  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:30.277405  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.277422  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:30.277475  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:30.297453  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.297467  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:30.297515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:30.316894  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.316915  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:30.316924  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:30.316936  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.346684  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:30.346705  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:30.376362  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:30.376378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:30.422918  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:30.422940  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:30.438917  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:30.438935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:30.494621  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:32.995681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:33.006896  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:33.026274  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.026292  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:33.026336  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:33.045071  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.045087  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:33.045134  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:33.064583  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.064611  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:33.064660  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:33.085351  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.085374  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:33.085431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:33.103978  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.103991  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:33.104045  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:33.123168  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.123186  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:33.123241  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:33.143080  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.143095  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:33.143105  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:33.143116  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:33.197825  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:33.190806   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.191305   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.192895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.193360   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.194895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:33.190806   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.191305   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.192895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.193360   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.194895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:33.197836  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:33.197850  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:33.226457  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:33.226476  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:33.257519  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:33.257546  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:33.309950  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:33.309971  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:35.827217  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:35.838617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:35.858342  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.858358  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:35.858412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:35.877344  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.877362  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:35.877416  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:35.897833  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.897848  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:35.897902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:35.916409  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.916428  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:35.916485  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:35.935688  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.935705  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:35.935766  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:35.954858  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.954876  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:35.954924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:35.973729  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.973746  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:35.973757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:35.973767  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:36.002045  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:36.002069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:36.029933  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:36.029949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:36.075963  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:36.075988  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:36.091711  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:36.091734  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:36.147521  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:36.140402   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.141008   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142529   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142962   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.144445   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:36.140402   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.141008   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142529   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142962   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.144445   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
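(Every describe-nodes attempt in this stretch fails identically: "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port 8441 inside the node, which is consistent with the empty docker ps results in each cycle; the control-plane containers never started, so kubectl cannot reach the API. A quick manual check for the same condition, as a sketch: it assumes shell access via minikube ssh and that curl is present in the node image, and /livez is the standard kube-apiserver health endpoint.)

    minikube -p functional-384766 ssh -- curl -sk https://localhost:8441/livez || echo "apiserver not listening on 8441"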
	I1222 22:59:38.649172  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:38.660310  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:38.679380  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.679396  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:38.679449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:38.698305  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.698318  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:38.698365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:38.717524  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.717541  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:38.717601  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:38.736808  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.736822  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:38.736874  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:38.756003  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.756017  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:38.756061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:38.774845  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.774858  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:38.774901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:38.793240  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.793257  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:38.793269  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:38.793281  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:38.821390  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:38.821407  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:38.868649  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:38.868671  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:38.884729  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:38.884749  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:38.940189  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:38.933255   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.933828   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935320   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935747   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.937211   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:38.933255   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.933828   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935320   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935747   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.937211   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:38.940200  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:38.940211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:41.470854  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:41.481957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:41.501032  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.501051  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:41.501102  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:41.522720  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.522740  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:41.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:41.544756  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.544769  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:41.544812  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:41.564773  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.564789  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:41.565312  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:41.586087  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.586104  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:41.586156  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:41.604141  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.604156  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:41.604206  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:41.623828  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.623846  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:41.623858  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:41.623870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:41.652778  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:41.652798  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:41.680995  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:41.681014  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:41.728777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:41.728800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:41.744897  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:41.744916  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:41.800644  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:41.793494   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.794046   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.795564   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.796035   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.797584   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:41.793494   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.794046   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.795564   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.796035   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.797584   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:44.302472  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:44.313688  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:44.333253  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.333267  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:44.333313  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:44.352778  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.352793  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:44.352851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:44.372079  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.372093  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:44.372135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:44.390683  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.390701  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:44.390761  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:44.409168  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.409185  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:44.409259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:44.426368  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.426381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:44.426426  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:44.444108  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.444124  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:44.444138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:44.444148  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:44.481663  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:44.481679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:44.529101  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:44.529121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:44.546062  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:44.546081  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:44.600660  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:44.593909   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.594437   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.595959   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.596345   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.597904   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
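Every failure block in this trace is the same symptom: the version-pinned kubectl dials https://localhost:8441 and gets connection refused, which lines up with the empty container listings above - nothing answers on the apiserver port because no apiserver container exists. Two quick checks from a node shell would confirm this; both commands are assumptions (ss from iproute2 and curl do not appear in this log):

	sudo ss -ltn 'sport = :8441'   # no output: nothing is bound to the apiserver port
	curl -sk https://localhost:8441/version || echo apiserver unreachable

A connection refused from curl here would match the kubectl stderr exactly.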
	I1222 22:59:44.600672  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:44.600684  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.129588  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:47.140641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:47.159435  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.159453  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:47.159498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:47.178540  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.178560  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:47.178634  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:47.198365  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.198383  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:47.198438  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:47.217411  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.217429  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:47.217479  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:47.236273  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.236287  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:47.236330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:47.255917  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.255930  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:47.255973  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:47.274750  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.274768  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:47.274779  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:47.274792  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:47.322428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:47.322452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:47.339666  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:47.339691  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:47.396552  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:47.389135   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.389730   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391303   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391736   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.393254   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:47.396562  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:47.396574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.425768  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:47.425785  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
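Between probes, each gather pass pulls four sources: the kubelet journal, the Docker and cri-docker journals, the kernel ring buffer, and a container listing whose command substitution (the which crictl || echo crictl fallback above) degrades to plain docker ps when crictl is absent. The same journals can be pulled from the host side; the lines below assume the minikube CLI with the profile under test active (add -p <profile> otherwise):

	minikube ssh -- sudo journalctl -u kubelet -n 400
	minikube ssh -- sudo journalctl -u docker -u cri-docker -n 400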
	I1222 22:59:49.955844  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:49.966834  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:49.985390  158374 logs.go:282] 0 containers: []
	W1222 22:59:49.985405  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:49.985446  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:50.003669  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.003687  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:50.003735  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:50.023188  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.023203  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:50.023254  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:50.042292  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.042309  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:50.042360  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:50.060457  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.060471  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:50.060516  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:50.078548  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.078565  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:50.078666  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:50.096685  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.096704  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:50.096717  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:50.096730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:50.125658  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:50.125680  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:50.173107  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:50.173124  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:50.188136  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:50.188152  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:50.242225  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:50.235426   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.235931   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237475   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237960   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.239487   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:50.242236  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:50.242246  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
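Each retry opens with the sudo pgrep line: -f matches against the full command line, -x requires the pattern to match that whole line, and -n keeps only the newest PID. A non-zero exit (no matching process) is what sends the trace back into another gathering pass, roughly every two and a half seconds here:

	# the liveness check that opens each retry; exit status 1 = not running
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo kube-apiserver process not found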
	I1222 22:59:52.771712  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:52.783330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:52.802157  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.802171  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:52.802219  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:52.820709  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.820726  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:52.820777  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:52.839433  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.839448  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:52.839515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:52.857834  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.857849  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:52.857903  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:52.875916  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.875933  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:52.875977  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:52.893339  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.893351  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:52.893394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:52.911298  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.911311  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:52.911319  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:52.911329  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:52.942377  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:52.942392  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:52.969572  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:52.969587  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:53.014323  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:53.014339  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:53.029751  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:53.029764  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:53.085527  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:53.078407   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.078987   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.080619   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.081049   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.082721   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:55.587247  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:55.598436  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:55.617688  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.617704  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:55.617764  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:55.637510  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.637528  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:55.637585  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:55.656117  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.656132  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:55.656187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:55.675258  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.675278  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:55.675327  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:55.694537  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.694555  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:55.694627  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:55.711993  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.712011  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:55.712056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:55.730198  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.730216  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:55.730228  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:55.730242  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:55.795390  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:55.795416  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:55.811790  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:55.811809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:55.867201  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:55.859754   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.860414   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862051   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862474   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.863985   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:55.867213  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:55.867224  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:55.898358  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:55.898381  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.428962  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:58.440024  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:58.459773  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.459787  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:58.459828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:58.478843  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.478863  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:58.478920  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:58.498503  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.498518  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:58.498563  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:58.518032  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.518052  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:58.518110  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:58.537315  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.537330  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:58.537388  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:58.556299  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.556319  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:58.556368  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:58.575345  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.575359  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:58.575369  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:58.575378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.603490  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:58.603508  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:58.651589  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:58.651620  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:58.667341  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:58.667358  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:58.723840  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:58.716532   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.717054   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.718649   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.719098   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.720749   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:58.723855  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:58.723865  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.257052  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:01.268153  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:01.287939  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.287954  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:01.288001  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:01.306844  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.306857  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:01.306904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:01.326511  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.326530  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:01.326579  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:01.345734  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.345748  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:01.345793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:01.364619  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.364634  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:01.364682  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:01.383578  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.383605  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:01.383654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:01.401753  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.401770  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:01.401781  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:01.401795  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:01.457583  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:01.450373   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.450906   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452946   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.454493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:01.457611  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:01.457625  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.486870  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:01.486891  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:01.514587  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:01.514619  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:01.561028  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:01.561052  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
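The dmesg gather trims the kernel log to warning-and-above records before keeping the last 400 lines; -H and -P appear to select human-readable output without a pager, and -L=never strips color so the capture stays plain text. It is harmless to replay on its own:

	# kernel messages at warning level and above, last 400 lines (verbatim from the log)
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400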
	I1222 23:00:04.078615  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:04.089843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:04.109432  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.109450  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:04.109498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:04.128585  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.128630  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:04.128680  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:04.147830  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.147846  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:04.147901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:04.166672  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.166686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:04.166730  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:04.185500  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.185523  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:04.185574  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:04.204345  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.204360  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:04.204404  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:04.222488  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.222503  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:04.222513  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:04.222523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:04.252225  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:04.252244  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:04.280489  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:04.280507  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:04.329635  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:04.329657  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:04.345631  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:04.345650  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:04.400851  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:04.393656   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.394230   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.395887   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.396281   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.397838   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:06.901498  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:06.913084  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:06.932724  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.932739  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:06.932793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:06.951127  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.951146  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:06.951187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:06.969488  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.969501  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:06.969543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:06.987763  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.987780  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:06.987824  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:07.005884  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.005900  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:07.005951  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:07.026370  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.026397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:07.026449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:07.047472  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.047486  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:07.047496  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:07.047505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:07.092662  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:07.092679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:07.107657  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:07.107672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:07.162182  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:07.155104   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.155706   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157237   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157737   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.159269   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:07.162193  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:07.162203  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:07.190466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:07.190482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:09.719767  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:09.730961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:09.750004  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.750021  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:09.750061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:09.768191  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.768203  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:09.768240  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:09.785655  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.785668  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:09.785715  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:09.803931  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.803946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:09.803987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:09.823040  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.823058  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:09.823105  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:09.841359  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.841373  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:09.841413  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:09.859786  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.859799  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:09.859812  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:09.859824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:09.905428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:09.905445  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:09.920496  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:09.920511  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:09.974948  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:09.968124   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.968667   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970182   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970583   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.972119   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
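The describe-nodes gather shells out to the kubectl binary minikube stages under /var/lib/minikube/binaries for the requested Kubernetes version, pointed at the node-local kubeconfig. It can be replayed verbatim from a node shell, and it will keep printing the same connection-refused stderr until an apiserver container finally appears:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig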
	I1222 23:00:09.974969  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:09.974982  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:10.003466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:10.003485  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.535644  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:12.546867  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:12.565761  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.565778  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:12.565825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:12.584431  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.584446  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:12.584504  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:12.602950  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.602966  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:12.603009  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:12.621210  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.621224  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:12.621268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:12.639377  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.639393  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:12.639444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:12.657924  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.657941  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:12.657984  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:12.676311  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.676326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:12.676336  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:12.676346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.703500  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:12.703515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:12.750933  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:12.750951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:12.766856  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:12.766870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:12.822138  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:12.815289   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.815808   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817339   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817801   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.819283   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:12.815289   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.815808   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817339   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817801   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.819283   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:12.822170  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:12.822269  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
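	Every describe-nodes attempt in this stretch fails the same way: kubectl dials https://localhost:8441 (the --apiserver-port this test starts with) and is refused, which is consistent with the empty k8s_kube-apiserver container list above; nothing is serving on that port yet. One quick way to confirm from outside the loop (a sketch; the profile name functional-384766 comes from this run, and ss from iproute2 is assumed present in the node image):

	    # Show any listener on the apiserver port inside the node.
	    minikube ssh -p functional-384766 -- \
	      "sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'"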
	I1222 23:00:15.355685  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:15.366722  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:15.385319  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.385334  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:15.385401  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:15.402653  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.402666  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:15.402712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:15.420695  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.420709  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:15.420757  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:15.438422  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.438438  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:15.438488  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:15.457961  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.457978  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:15.458023  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:15.477016  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.477031  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:15.477075  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:15.495320  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.495335  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:15.495346  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:15.495363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:15.542697  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:15.542716  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:15.557986  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:15.558002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:15.613071  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:15.605742   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.606273   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.607863   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.608322   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.609898   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:15.605742   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.606273   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.607863   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.608322   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.609898   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:15.613082  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:15.613093  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.643893  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:15.643912  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
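	The Docker gather step pulls the tail of two systemd units at once: journalctl -u docker -u cri-docker -n 400 shows the last 400 entries matching either the docker engine or the cri-docker CRI shim, merged chronologically, which is where container-start failures for the static control-plane pods would surface. The same query can be run by hand (sketch; --no-pager is added so the output is not paged interactively):

	    minikube ssh -p functional-384766 -- \
	      "sudo journalctl -u docker -u cri-docker -n 400 --no-pager"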
	I1222 23:00:18.176478  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:18.187435  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:18.206820  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.206836  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:18.206885  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:18.225162  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.225179  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:18.225242  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:18.244089  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.244106  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:18.244149  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:18.263582  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.263618  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:18.263678  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:18.285421  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.285439  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:18.285483  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:18.304575  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.304616  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:18.304679  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:18.322814  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.322831  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:18.322842  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:18.322853  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:18.367678  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:18.367695  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:18.384038  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:18.384060  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:18.439158  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:18.432401   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.432947   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434455   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434840   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.436266   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:18.432401   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.432947   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434455   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434840   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.436266   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:18.439172  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:18.439186  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:18.468274  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:18.468290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:20.996786  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:21.007676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:21.026577  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.026589  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:21.026662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:21.045179  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.045195  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:21.045237  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:21.064216  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.064230  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:21.064278  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:21.082929  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.082946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:21.082991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:21.101298  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.101314  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:21.101372  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:21.119708  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.119719  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:21.119759  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:21.137828  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.137841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:21.137849  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:21.137859  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:21.167198  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:21.167214  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:21.194956  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:21.194974  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:21.243666  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:21.243687  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:21.259092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:21.259108  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:21.316128  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:21.309163   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.309711   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311266   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311721   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.313183   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:21.309163   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.309711   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311266   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311721   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.313183   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
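	The timestamps show the same probe repeating roughly every 2.5 seconds: pgrep -xnf kube-apiserver.*minikube.* looks for the newest process whose full command line matches that pattern, and each miss triggers a fresh round of log gathering. A minimal sketch of an equivalent wait loop (this is only the shape of what the log records, not minikube's Go implementation):

	    # Poll for a running kube-apiserver process, ~2.5s apart, 20 tries.
	    for i in $(seq 1 20); do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "apiserver up after $i attempt(s)"
	        exit 0
	      fi
	      sleep 2.5
	    done
	    echo "apiserver never came up" >&2
	    exit 1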
	I1222 23:00:23.817830  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:23.829010  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:23.847819  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.847833  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:23.847883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:23.866626  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.866640  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:23.866685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:23.884038  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.884053  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:23.884099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:23.903021  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.903037  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:23.903091  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:23.921758  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.921771  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:23.921817  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:23.940118  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.940135  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:23.940176  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:23.958805  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.958817  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:23.958826  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:23.958836  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:24.006524  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:24.006542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:24.021579  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:24.021602  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:24.077965  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:24.077976  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:24.077986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:24.107448  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:24.107464  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:26.635419  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:26.646546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:26.665787  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.665805  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:26.665856  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:26.683869  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.683885  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:26.683930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:26.702549  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.702565  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:26.702628  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:26.720884  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.720901  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:26.720947  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:26.739437  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.739453  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:26.739498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:26.757871  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.757885  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:26.757927  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:26.775863  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.775882  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:26.775893  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:26.775902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:26.821886  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:26.821903  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:26.837204  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:26.837220  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:26.891970  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:26.884764   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.885231   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.886895   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.887281   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.888844   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:26.884764   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.885231   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.886895   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.887281   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.888844   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:26.891981  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:26.891991  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:26.922932  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:26.922949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
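	The docker ps filters above lean on cri-dockerd's container naming scheme (inherited from dockershim): kubelet-managed containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name filter on the k8s_kube-apiserver prefix finds the apiserver container regardless of pod UID. Zero matches across every component, etcd included, means the kubelet has not started any control-plane container at all. The same check with a slightly richer format string for manual use (sketch):

	    docker ps -a --filter "name=k8s_kube-apiserver" \
	      --format "{{.ID}}  {{.Names}}  {{.Status}}"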
	I1222 23:00:29.452400  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:29.463551  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:29.482265  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.482278  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:29.482326  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:29.501689  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.501707  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:29.501762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:29.522730  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.522747  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:29.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:29.542657  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.542671  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:29.542720  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:29.560883  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.560897  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:29.560938  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:29.579281  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.579297  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:29.579340  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:29.597740  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.597755  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:29.597766  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:29.597777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:29.627231  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:29.627248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:29.655168  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:29.655183  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:29.703330  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:29.703348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:29.718800  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:29.718821  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:29.773515  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:29.766735   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.767220   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.768799   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.769197   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.770715   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:29.766735   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.767220   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.768799   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.769197   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.770715   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:32.274411  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:32.285356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:32.304409  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.304423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:32.304465  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:32.324167  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.324183  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:32.324228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:32.342878  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.342893  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:32.342950  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:32.362212  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.362226  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:32.362268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:32.381154  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.381171  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:32.381229  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:32.400512  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.400533  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:32.400587  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:32.419083  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.419097  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:32.419112  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:32.419121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:32.466805  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:32.466824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:32.482931  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:32.482947  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:32.543407  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:32.536205   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.536750   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538320   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538796   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.540418   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:32.536205   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.536750   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538320   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538796   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.540418   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:32.543419  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:32.543436  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:32.572975  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:32.572990  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:35.102503  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:35.113391  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:35.132097  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.132109  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:35.132151  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:35.150359  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.150378  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:35.150441  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:35.169070  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.169088  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:35.169141  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:35.187626  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.187641  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:35.187686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:35.205837  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.205854  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:35.205895  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:35.224198  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.224213  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:35.224255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:35.241750  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.241765  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:35.241774  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:35.241783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:35.286130  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:35.286145  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:35.301096  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:35.301111  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:35.356973  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:35.349818   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.350427   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.351993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.352503   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.353993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:35.349818   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.350427   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.351993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.352503   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.353993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:35.356986  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:35.356997  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:35.385504  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:35.385523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:37.914534  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:37.925443  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:37.943928  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.943942  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:37.943990  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:37.961408  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.961424  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:37.961481  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:37.979364  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.979380  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:37.979437  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:37.997737  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.997751  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:37.997796  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:38.016341  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.016358  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:38.016425  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:38.035203  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.035221  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:38.035270  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:38.053655  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.053672  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:38.053684  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:38.053699  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:38.100003  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:38.100022  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:38.116642  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:38.116661  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:38.170897  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:38.164046   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.164628   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166176   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166562   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.168042   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:38.164046   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.164628   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166176   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166562   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.168042   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:38.170907  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:38.170921  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:38.200254  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:38.200273  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
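	The dmesg gather narrows kernel output before tailing it: -P disables the pager, -H prints human-readable timestamps, -L=never strips color codes, and --level warn,err,crit,alert,emerg keeps only warnings and worse, so OOM kills or cgroup errors that might explain a dead apiserver would show up here. Run directly against this node it looks like (sketch):

	    minikube ssh -p functional-384766 -- \
	      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"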
	I1222 23:00:40.729556  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:40.740911  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:40.762071  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.762085  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:40.762131  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:40.782191  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.782207  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:40.782259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:40.802303  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.802318  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:40.802365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:40.821101  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.821115  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:40.821159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:40.839813  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.839830  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:40.839880  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:40.859473  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.859490  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:40.859546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:40.877060  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.877076  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:40.877088  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:40.877101  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:40.922835  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:40.922852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:40.938346  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:40.938361  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:40.993518  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:40.986693   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.987159   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.988687   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.989134   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.990683   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:40.986693   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.987159   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.988687   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.989134   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.990683   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:40.993531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:40.993542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:41.023093  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:41.023109  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... nine further near-identical poll cycles elided: from 23:00:43 through 23:01:06, minikube re-runs the same pgrep and docker ps checks roughly every 2.5s, finds 0 containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet, and every "describe nodes" attempt fails with "connection refused" on localhost:8441 ...]
	I1222 23:01:08.971014  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:08.982172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:09.001296  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.001315  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:09.001377  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:09.021010  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.021025  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:09.021065  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:09.039299  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.039315  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:09.039361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:09.059055  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.059069  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:09.059119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:09.079065  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.079080  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:09.079123  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:09.098129  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.098148  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:09.098194  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:09.116949  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.116965  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:09.116974  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:09.116984  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:09.172761  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:09.165842   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.166366   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.167907   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.168380   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.169881   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:09.172771  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:09.172782  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:09.203094  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:09.203115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:09.231372  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:09.231389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:09.279710  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:09.279730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:11.797972  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:11.809159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:11.828406  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.828423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:11.828474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:11.847173  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.847194  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:11.847248  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:11.866123  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.866141  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:11.866191  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:11.885374  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.885388  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:11.885428  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:11.904372  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.904386  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:11.904429  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:11.923420  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.923437  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:11.923496  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:11.942294  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.942332  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:11.942344  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:11.942356  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:11.999449  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:11.992239   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.992816   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994388   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994771   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.996307   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:11.999465  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:11.999478  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:12.029498  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:12.029524  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:12.058708  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:12.058726  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:12.106818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:12.106837  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:14.624265  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:14.635338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:14.654466  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.654481  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:14.654523  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:14.674801  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.674817  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:14.674860  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:14.694258  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.694275  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:14.694322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:14.714916  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.714932  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:14.714980  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:14.734141  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.734155  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:14.734198  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:14.754093  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.754108  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:14.754162  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:14.773451  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.773468  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:14.773481  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:14.773496  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:14.830750  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:14.823428   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.824043   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.825689   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.826129   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.827703   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:14.830760  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:14.830770  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:14.859787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:14.859804  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:14.888611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:14.888631  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:14.936097  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:14.936118  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.453191  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:17.464732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:17.485010  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.485025  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:17.485072  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:17.505952  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.505969  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:17.506027  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:17.528761  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.528776  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:17.528821  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:17.549296  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.549312  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:17.549376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:17.568100  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.568117  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:17.568167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:17.587017  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.587034  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:17.587086  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:17.606020  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.606036  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:17.606045  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:17.606061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.621414  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:17.621430  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:17.677909  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:17.670771   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.671262   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.672895   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.673340   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.674869   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:17.677925  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:17.677935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:17.708117  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:17.708138  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:17.739554  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:17.739574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:20.288948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:20.299810  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:20.318929  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.318945  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:20.319006  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:20.337556  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.337573  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:20.337641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:20.355705  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.355718  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:20.355760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:20.373672  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.373686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:20.373726  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:20.392616  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.392631  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:20.392674  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:20.411253  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.411270  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:20.411322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:20.429537  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.429552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:20.429563  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:20.429575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:20.445080  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:20.445098  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:20.501506  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:20.494080   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.494697   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496376   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496836   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.498365   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:20.501520  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:20.501535  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:20.531907  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:20.531925  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:20.560530  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:20.560547  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.108846  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:23.119974  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:23.138613  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.138631  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:23.138686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:23.156921  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.156935  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:23.156975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:23.175153  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.175166  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:23.175209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:23.193263  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.193295  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:23.193356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:23.212207  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.212221  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:23.212263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:23.230929  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.230945  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:23.231005  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:23.249646  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.249659  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:23.249669  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:23.249679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:23.277729  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:23.277745  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.324063  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:23.324082  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:23.339406  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:23.339425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:23.394875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:23.387312   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.387859   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.389847   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.390576   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.392042   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:23.394885  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:23.394895  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:25.926075  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:25.937168  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:25.956016  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.956028  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:25.956074  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:25.974143  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.974159  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:25.974202  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:25.992434  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.992449  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:25.992505  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:26.010399  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.010415  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:26.010474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:26.029958  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.029974  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:26.030039  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:26.048962  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.048977  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:26.049032  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:26.067563  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.067581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:26.067605  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:26.067627  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:26.115800  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:26.115818  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:26.131143  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:26.131160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:26.187038  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:26.179580   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.180759   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.181222   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.182757   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.183141   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:26.187049  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:26.187059  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:26.217787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:26.217807  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.746733  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:28.757902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:28.777575  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.777608  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:28.777664  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:28.798343  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.798363  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:28.798420  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:28.816843  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.816859  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:28.816904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:28.835441  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.835458  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:28.835507  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:28.855127  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.855139  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:28.855195  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:28.873368  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.873381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:28.873422  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:28.890543  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.890556  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:28.890565  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:28.890575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.918533  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:28.918553  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:28.963368  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:28.963389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:28.978933  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:28.978951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:29.035020  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:29.035033  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:29.035044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.565580  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:31.576645  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:31.596094  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.596110  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:31.596173  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:31.615329  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.615345  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:31.615394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:31.634047  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.634065  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:31.634117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:31.653553  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.653567  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:31.653626  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:31.671338  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.671354  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:31.671411  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:31.689466  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.689482  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:31.689536  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:31.707731  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.707744  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:31.707753  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:31.707761  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:31.759802  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:31.759828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:31.776809  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:31.776828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:31.833983  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:31.833996  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:31.834010  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.862984  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:31.863002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.392435  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:34.403543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:34.422799  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.422814  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:34.422857  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:34.442014  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.442029  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:34.442076  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:34.460995  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.461007  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:34.461056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:34.479671  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.479688  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:34.479737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:34.498097  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.498113  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:34.498169  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:34.515924  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.515939  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:34.515985  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:34.534433  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.534447  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:34.534455  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:34.534466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:34.589495  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:34.589505  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:34.589515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:34.619333  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:34.619352  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.647369  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:34.647385  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:34.692228  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:34.692249  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.207836  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:37.218829  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:37.237894  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.237908  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:37.237953  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:37.256000  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.256012  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:37.256054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:37.275097  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.275112  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:37.275161  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:37.294044  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.294059  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:37.294099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:37.312979  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.312995  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:37.313041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:37.331662  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.331675  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:37.331718  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:37.350197  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.350212  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:37.350223  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:37.350233  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:37.397328  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:37.397345  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.412835  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:37.412852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:37.468475  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:37.468487  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:37.468498  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:37.497915  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:37.497934  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.027516  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:40.039728  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:40.058705  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.058719  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:40.058762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:40.076167  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.076184  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:40.076231  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:40.095983  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.095996  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:40.096037  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:40.114657  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.114670  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:40.114717  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:40.133015  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.133028  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:40.133070  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:40.152127  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.152140  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:40.152187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:40.169569  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.169583  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:40.169622  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:40.169636  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:40.184978  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:40.184992  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:40.238860  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:40.238870  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:40.238879  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:40.268146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:40.268165  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.295931  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:40.295946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:42.843708  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:42.854816  158374 kubeadm.go:602] duration metric: took 4m1.499731906s to restartPrimaryControlPlane
	W1222 23:01:42.854901  158374 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 23:01:42.854978  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
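kubeadm reset removes the generated manifests and kubeconfig files under /etc/kubernetes, which is why the config check a few lines below finds none of the four expected files. The equivalent manual invocation, using the version-pinned binaries minikube ships inside the node (a sketch):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force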
	I1222 23:01:43.257733  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:01:43.270528  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:01:43.278400  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:01:43.278439  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:01:43.285901  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:01:43.285910  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:01:43.285947  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:01:43.293882  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:01:43.293919  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:01:43.300825  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:01:43.307941  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:01:43.307983  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:01:43.314784  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.321699  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:01:43.321728  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.328397  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:01:43.335272  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:01:43.335301  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
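The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: each generated kubeconfig is kept only if it already references the expected control-plane endpoint, and (as here, right after a reset) deleted otherwise so that kubeadm init regenerates it. Condensed into a loop (a sketch of the same logic):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done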
	I1222 23:01:43.341949  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:01:43.376034  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:01:43.376102  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:01:43.445165  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:01:43.445236  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:01:43.445264  158374 kubeadm.go:319] OS: Linux
	I1222 23:01:43.445301  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:01:43.445350  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:01:43.445392  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:01:43.445455  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:01:43.445494  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:01:43.445551  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:01:43.445588  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:01:43.445673  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:01:43.445736  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:01:43.500219  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:01:43.500396  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:01:43.500507  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:01:43.513368  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:01:43.515533  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:01:43.515634  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:01:43.515681  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:01:43.515765  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:01:43.515820  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:01:43.515882  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:01:43.515924  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:01:43.515975  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:01:43.516024  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:01:43.516083  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:01:43.516151  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:01:43.516181  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:01:43.516264  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:01:43.648060  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:01:43.775539  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:01:43.806099  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:01:43.912171  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:01:44.004004  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:01:44.004296  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:01:44.006366  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:01:44.008239  158374 out.go:252]   - Booting up control plane ...
	I1222 23:01:44.008308  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:01:44.008365  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:01:44.009041  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:01:44.026974  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:01:44.027088  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:01:44.033700  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:01:44.033947  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:01:44.034017  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:01:44.136722  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:01:44.136861  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:05:44.137086  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000408574s
	I1222 23:05:44.137129  158374 kubeadm.go:319] 
	I1222 23:05:44.137190  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:05:44.137216  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:05:44.137303  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:05:44.137309  158374 kubeadm.go:319] 
	I1222 23:05:44.137438  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:05:44.137492  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:05:44.137528  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:05:44.137531  158374 kubeadm.go:319] 
	I1222 23:05:44.140373  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.140752  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.140849  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:05:44.141147  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:05:44.141156  158374 kubeadm.go:319] 
	I1222 23:05:44.141230  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
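The wait-control-plane phase polls the kubelet's local health endpoint for up to four minutes, which matches the timestamps above (started 23:01:44, declared unhealthy 23:05:44). The probe and the two triage commands kubeadm suggests can be run by hand inside the node (a sketch):

    curl -sSL http://127.0.0.1:10248/healthz    # the same check kubeadm performs
    systemctl status kubelet                    # is the unit active?
    journalctl -xeu kubelet                     # why it exited, if it crashed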
	W1222 23:05:44.141360  158374 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000408574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
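Of the three preflight warnings, the cgroups one is the most plausible root cause on this host: the kernel (6.8.0-1045-gcp) is running cgroup v1, and per the warning kubelet v1.35+ refuses to run there unless explicitly permitted. The init command already skips the SystemVerification preflight, but the kubelet-side opt-out is a configuration field. A minimal sketch, assuming the KubeletConfiguration v1beta1 spelling of the option is failCgroupV1 (the patch file name below is hypothetical):

    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'failCgroupV1: false' > kubelet-cgroupv1-patch.yaml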
	
	I1222 23:05:44.141451  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:05:44.555201  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:05:44.567871  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:05:44.567915  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:05:44.575883  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:05:44.575897  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:05:44.575941  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:05:44.583486  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:05:44.583527  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:05:44.590649  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:05:44.597769  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:05:44.597806  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:05:44.604798  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.611986  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:05:44.612034  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.619193  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:05:44.626515  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:05:44.626555  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:05:44.633629  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:05:44.735033  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.735554  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.792296  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:09:45.398895  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:09:45.398970  158374 kubeadm.go:319] 
	I1222 23:09:45.399090  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:09:45.401586  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:09:45.401671  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:09:45.401745  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:09:45.401791  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:09:45.401820  158374 kubeadm.go:319] OS: Linux
	I1222 23:09:45.401885  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:09:45.401955  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:09:45.402023  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:09:45.402088  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:09:45.402152  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:09:45.402201  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:09:45.402235  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:09:45.402274  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:09:45.402309  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:09:45.402367  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:09:45.402449  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:09:45.402536  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:09:45.402585  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:09:45.404239  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:09:45.404310  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:09:45.404360  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:09:45.404421  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:09:45.404472  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:09:45.404530  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:09:45.404569  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:09:45.404650  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:09:45.404705  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:09:45.404761  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:09:45.404827  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:09:45.404867  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:09:45.404910  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:09:45.404948  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:09:45.404989  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:09:45.405029  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:09:45.405075  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:09:45.405115  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:09:45.405181  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:09:45.405240  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:09:45.406503  158374 out.go:252]   - Booting up control plane ...
	I1222 23:09:45.406585  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:09:45.406677  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:09:45.406738  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:09:45.406832  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:09:45.406905  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:09:45.406993  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:09:45.407062  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:09:45.407092  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:09:45.407211  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:09:45.407300  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:09:45.407348  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000888774s
	I1222 23:09:45.407350  158374 kubeadm.go:319] 
	I1222 23:09:45.407409  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:09:45.407435  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:09:45.407521  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:09:45.407524  158374 kubeadm.go:319] 
	I1222 23:09:45.407628  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:09:45.407652  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:09:45.407675  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:09:45.407697  158374 kubeadm.go:319] 
	I1222 23:09:45.407753  158374 kubeadm.go:403] duration metric: took 12m4.079935698s to StartCluster
	I1222 23:09:45.407873  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:09:45.407938  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:09:45.445003  158374 cri.go:96] found id: ""
	I1222 23:09:45.445021  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.445027  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:09:45.445038  158374 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:09:45.445084  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:09:45.470772  158374 cri.go:96] found id: ""
	I1222 23:09:45.470788  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.470794  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:09:45.470799  158374 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:09:45.470845  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:09:45.495903  158374 cri.go:96] found id: ""
	I1222 23:09:45.495920  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.495927  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:09:45.495933  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:09:45.495983  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:09:45.523926  158374 cri.go:96] found id: ""
	I1222 23:09:45.523943  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.523952  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:09:45.523960  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:09:45.524021  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:09:45.551137  158374 cri.go:96] found id: ""
	I1222 23:09:45.551153  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.551164  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:09:45.551171  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:09:45.551226  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:09:45.576565  158374 cri.go:96] found id: ""
	I1222 23:09:45.576583  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.576611  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:09:45.576621  158374 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:09:45.576676  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:09:45.601969  158374 cri.go:96] found id: ""
	I1222 23:09:45.601983  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.601991  158374 logs.go:284] No container was found matching "kindnet"
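This post-mortem pass enumerates control-plane containers through the CRI rather than the docker ps name filters used in the earlier wait loop; every query returns an empty id list, confirming that no control-plane container was ever created, not merely that one crashed. The same sweep by hand (a sketch):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl --timeout=10s ps -a --quiet --name="$c"
    done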
	I1222 23:09:45.602003  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:09:45.602018  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:09:45.650169  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:09:45.650193  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:09:45.665853  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:09:45.665870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:09:45.722796  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:09:45.722812  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:09:45.722824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:09:45.751752  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:09:45.751775  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
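The container-status gatherer is written to survive hosts where crictl is missing from root's PATH: the command substitution falls back to the literal word crictl, and if that invocation fails the || falls through to docker ps -a. The same one-liner spelled with $() instead of backticks (a sketch):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a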
	W1222 23:09:45.780185  158374 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:09:45.780232  158374 out.go:285] * 
	W1222 23:09:45.780327  158374 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.780349  158374 out.go:285] * 
	W1222 23:09:45.780644  158374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:09:45.784145  158374 out.go:203] 
	W1222 23:09:45.785152  158374 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.785198  158374 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:09:45.785226  158374 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:09:45.786470  158374 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.415844375Z" level=info msg="Loading containers: done."
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:48.606157   37616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:48.606693   37616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:48.608210   37616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:48.608705   37616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:48.610194   37616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:09:48 up  2:52,  0 user,  load average: 0.19, 0.13, 0.33
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:09:45 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:46 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 23:09:46 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:46 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:46 functional-384766 kubelet[37315]: E1222 23:09:46.306432   37315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:46 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:46 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:46 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 23:09:46 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:47 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:47 functional-384766 kubelet[37448]: E1222 23:09:47.052818   37448 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:47 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:47 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:47 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 22 23:09:47 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:47 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:47 functional-384766 kubelet[37476]: E1222 23:09:47.786581   37476 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:47 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:47 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:48 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 22 23:09:48 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:48 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:48 functional-384766 kubelet[37575]: E1222 23:09:48.541647   37575 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:48 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:48 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
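
The kubelet crash loop captured above (systemd restart counters 321 through 324) is one repeated validation failure: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host unless the KubeletConfiguration option 'FailCgroupV1' is explicitly set to 'false', exactly as the SystemVerification warning in the kubeadm output states. A minimal shell sketch for confirming the diagnosis and opting back into cgroup v1 on the node; the config path comes from the [kubelet-start] lines above, and the camelCase YAML spelling failCgroupV1 plus the append-and-restart approach are assumptions, not the harness's own remediation:

    # Confirm the host cgroup version: "cgroup2fs" means v2, "tmpfs" means v1.
    stat -fc %T /sys/fs/cgroup

    # Inspect the crash loop directly (the kubeadm output suggests 'journalctl -xeu kubelet').
    sudo journalctl -u kubelet --no-pager | tail -n 50

    # Hypothetical override: opt back into cgroup v1 in the kubelet config kubeadm wrote,
    # then restart the service. Assumes the file has no existing failCgroupV1 key.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet

The longer-term route the warnings point at is migrating the host to cgroup v2 rather than re-enabling v1, since v1 support is deprecated in both the kubelet and Docker.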
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (323.536373ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (1.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-384766 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Non-zero exit: kubectl --context functional-384766 apply -f testdata/invalidsvc.yaml: exit status 1 (44.657805ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2333: kubectl --context functional-384766 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (0.04s)
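
The apply never reaches the invalid service definition: kubectl first tries to download the OpenAPI schema for validation from https://192.168.49.2:8441 and the connection is refused because the apiserver never came up. Passing --validate=false (as the error text suggests) would not rescue the test, since the apply itself also needs a reachable apiserver. A quick probe of the endpoint, address taken from the error above:

    # Expect "connection refused" while the control plane is down.
    curl -k https://192.168.49.2:8441/healthz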

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (2.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-384766 --alsologtostderr -v=1]
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs
functional_test.go:938: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-384766 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-384766 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-384766 --alsologtostderr -v=1] stderr:
I1222 23:10:00.686802  189221 out.go:360] Setting OutFile to fd 1 ...
I1222 23:10:00.687117  189221 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:00.687132  189221 out.go:374] Setting ErrFile to fd 2...
I1222 23:10:00.687138  189221 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:00.687423  189221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:10:00.687814  189221 mustload.go:66] Loading cluster: functional-384766
I1222 23:10:00.688310  189221 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:00.688903  189221 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:10:00.708261  189221 host.go:66] Checking if "functional-384766" exists ...
I1222 23:10:00.708541  189221 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 23:10:00.767358  189221 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.757193071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1222 23:10:00.767483  189221 api_server.go:166] Checking apiserver status ...
I1222 23:10:00.767535  189221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1222 23:10:00.767618  189221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:10:00.786301  189221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
W1222 23:10:00.889665  189221 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1222 23:10:00.891607  189221 out.go:179] * The control-plane node functional-384766 apiserver is not running: (state=Stopped)
I1222 23:10:00.892998  189221 out.go:179]   To start a cluster, run: "minikube start -p functional-384766"
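
The dashboard command stops at the same gate as every other test in this group: its apiserver health check (api_server.go:166 above) is just a pgrep for the kube-apiserver process over SSH, and exit status 1 from pgrep means no matching process was found. The probe can be re-run by hand, with the pattern copied from the log:

    # Re-run the apiserver liveness probe the dashboard command performs.
    out/minikube-linux-amd64 -p functional-384766 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'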
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
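
The inspect output shows the kicbase container itself is healthy and the apiserver port is published (8441/tcp on 127.0.0.1:32786), so the failure is inside the guest rather than in Docker networking. The mapping can be pulled out with the same Go-template style the harness uses for 22/tcp further up:

    # Extract the host port published for the apiserver (8441/tcp).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384766

    # Expect "connection refused" here too until the control plane comes up.
    curl -k https://127.0.0.1:32786/healthz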
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (368.568228ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
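
Note the asymmetry with the earlier probe: {{.Host}} reports Running while {{.APIServer}} reported Stopped, i.e. the container is up but the control plane inside it never started, which is also why status exits 2 (marked "may be ok" by the harness). A combined probe, as a sketch; {{.Host}} and {{.APIServer}} appear in the harness's own templates above, while {{.Kubelet}} is assumed to be available in the same status template:

    # One-line health summary; a non-zero exit is expected while components are down.
    out/minikube-linux-amd64 status -p functional-384766 --format='host={{.Host}} apiserver={{.APIServer}} kubelet={{.Kubelet}}'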
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-384766 service list                                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service   │ functional-384766 service list -o json                                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service   │ functional-384766 service --namespace=default --https --url hello-node                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service   │ functional-384766 service hello-node --url --format={{.IP}}                                                                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service   │ functional-384766 service hello-node --url                                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001:/mount-9p --alsologtostderr -v=1              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh       │ functional-384766 ssh -- ls -la /mount-9p                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh       │ functional-384766 ssh cat /mount-9p/test-1766444997145080367                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh       │ functional-384766 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-384766 ssh sudo umount -f /mount-9p                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun683538616/001:/mount-9p --alsologtostderr -v=1 --port 39301 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ start     │ -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1      │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ start     │ -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1      │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ start     │ -p functional-384766 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ dashboard │ --url --port 0 -p functional-384766 --alsologtostderr -v=1                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh -- ls -la /mount-9p                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ ssh       │ functional-384766 ssh sudo umount -f /mount-9p                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount3 --alsologtostderr -v=1               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount1 --alsologtostderr -v=1               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount1                                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount2 --alsologtostderr -v=1               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:10:00
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:10:00.471084  189065 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:10:00.471344  189065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.471353  189065 out.go:374] Setting ErrFile to fd 2...
	I1222 23:10:00.471358  189065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.471534  189065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:10:00.471985  189065 out.go:368] Setting JSON to false
	I1222 23:10:00.472909  189065 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10340,"bootTime":1766434660,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:10:00.472959  189065 start.go:143] virtualization: kvm guest
	I1222 23:10:00.474587  189065 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:10:00.475694  189065 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:10:00.475703  189065 notify.go:221] Checking for updates...
	I1222 23:10:00.477739  189065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:10:00.478849  189065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:10:00.479809  189065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:10:00.480818  189065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:10:00.482103  189065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:10:00.483637  189065 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:10:00.484147  189065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:10:00.508451  189065 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:10:00.508633  189065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.567029  189065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.557282865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.567130  189065 docker.go:319] overlay module found
	I1222 23:10:00.568775  189065 out.go:179] * Using the docker driver based on existing profile
	I1222 23:10:00.569819  189065 start.go:309] selected driver: docker
	I1222 23:10:00.569832  189065 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.569903  189065 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:10:00.570021  189065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.622749  189065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.61321899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.623382  189065 cni.go:84] Creating CNI manager for ""
	I1222 23:10:00.623470  189065 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:10:00.623528  189065 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.625075  189065 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.415844375Z" level=info msg="Loading containers: done."
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:10:02.512696   39612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:02.513167   39612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:02.514766   39612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:02.515269   39612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:02.516842   39612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:10:02 up  2:52,  0 user,  load average: 0.76, 0.25, 0.37
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:09:59 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:59 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 339.
	Dec 22 23:09:59 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:59 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:59 functional-384766 kubelet[39179]: E1222 23:09:59.763241   39179 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:59 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:59 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:10:00 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 340.
	Dec 22 23:10:00 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:00 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:00 functional-384766 kubelet[39237]: E1222 23:10:00.543824   39237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:10:00 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:10:00 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 341.
	Dec 22 23:10:01 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:01 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:01 functional-384766 kubelet[39325]: E1222 23:10:01.284811   39325 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 342.
	Dec 22 23:10:01 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:01 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:01 functional-384766 kubelet[39436]: E1222 23:10:01.981193   39436 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
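A note on the failure mode captured above: every error in this run funnels back to the ==> kubelet <== section, where kubelet v1.35.0-rc.1 exits at configuration validation on a cgroup v1 host and systemd restarts it (counters 339 through 342), so the apiserver never binds port 8441 and every kubectl call is refused. As a minimal illustration (not minikube code), the usual way to tell the hierarchies apart is the marker file /sys/fs/cgroup/cgroup.controllers, which exists only under cgroup v2:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The unified (v2) hierarchy exposes /sys/fs/cgroup/cgroup.controllers;
    	// on a cgroup v1 host, as the kubelet error above implies for this agent,
    	// the Stat fails.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		fmt.Println("cgroup v1: kubelet v1.35.0-rc.1 refuses to start")
    	}
    }

This is consistent with the docker info output earlier, which reports CgroupDriver:cgroupfs and carries the daemon's own warning that cgroup v1 support is deprecated.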
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (314.971746ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (2.26s)
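The connection-refused errors in the describe-nodes output above are the client-side view of the same problem: nothing is listening behind localhost:8441. A self-contained probe, as a sketch (address and port taken from the log, not a minikube API):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Reproduces the "dial tcp [::1]:8441: connect: connection refused"
    	// seen in the kubectl stderr above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }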

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (2.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 status: exit status 2 (299.98834ms)

-- stdout --
	functional-384766
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-384766 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (289.995032ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-384766 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
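The -f flag is rendered as a Go text/template over minikube's status struct, and the misspelled kublet label is echoed back verbatim because it is literal text in the test's own format string, not a template field. A sketch reproducing the rendering (this Status type is a stand-in carrying only the four referenced fields, not minikube's real type):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a stand-in with just the fields the format string references.
    type Status struct {
    	Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
    	tmpl := template.Must(template.New("status").Parse(format))
    	// Values copied from the stdout block above; prints
    	// host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured
    	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"})
    }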
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 status -o json: exit status 2 (317.042413ms)

-- stdout --
	{"Name":"functional-384766","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-384766 status -o json" : exit status 2
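The JSON form carries the same fields and decodes directly; note that it reports Kubelet as Running while the custom-format call a moment earlier said Stopped, which is consistent with the systemd restart loop flapping between the two samples. A decoding sketch (field names copied from the JSON line above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // ClusterState mirrors the keys in the JSON status line above.
    type ClusterState struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    	Worker                                     bool
    }

    func main() {
    	raw := `{"Name":"functional-384766","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
    	var st ClusterState
    	if err := json.Unmarshal([]byte(raw), &st); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
    }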
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
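Reading the NetworkSettings.Ports block above, the container's apiserver port 8441/tcp is published on 127.0.0.1:32786. The minikube start log later in this section pulls the 22/tcp mapping out with a docker inspect Go template; the same pattern works for the apiserver port, sketched here with the container name from this report:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same inspect-with-template trick the "Last Start" log below uses for 22/tcp.
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
    		"functional-384766").Output()
    	if err != nil {
    		panic(err)
    	}
    	// With the inspect output above this prints 127.0.0.1:32786.
    	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }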
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (306.273675ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
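The "(may be ok)" annotation reflects that minikube status signals component state through its exit code, so exit status 2 alongside host Running is information rather than a hard failure, and the harness proceeds to collect logs. A sketch of reading that exit code from Go (binary path and profile name as they appear in the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "status", "-p", "functional-384766")
    	out, err := cmd.Output()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// Nonzero exit codes flag stopped components, as with the
    		// "exit status 2 (may be ok)" lines in this report.
    		fmt.Printf("exit code %d, output:\n%s", exitErr.ExitCode(), out)
    		return
    	}
    	fmt.Print(string(out))
    }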
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ functional-384766 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh -n functional-384766 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh sudo cat /etc/ssl/certs/758032.pem                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ cp      │ functional-384766 cp functional-384766:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3180693308/001/cp-test.txt │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh sudo cat /usr/share/ca-certificates/758032.pem                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh -n functional-384766 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ cp      │ functional-384766 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh echo hello                                                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh -n functional-384766 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ tunnel  │ functional-384766 tunnel --alsologtostderr                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ tunnel  │ functional-384766 tunnel --alsologtostderr                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ addons  │ functional-384766 addons list                                                                                                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ tunnel  │ functional-384766 tunnel --alsologtostderr                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ addons  │ functional-384766 addons list -o json                                                                                                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ service │ functional-384766 service list                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service │ functional-384766 service list -o json                                                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service │ functional-384766 service --namespace=default --https --url hello-node                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service │ functional-384766 service hello-node --url --format={{.IP}}                                                                                                  │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service │ functional-384766 service hello-node --url                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh     │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ mount   │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001:/mount-9p --alsologtostderr -v=1                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh     │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh     │ functional-384766 ssh -- ls -la /mount-9p                                                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:57:36
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:57:36.254392  158374 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:57:36.254700  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254705  158374 out.go:374] Setting ErrFile to fd 2...
	I1222 22:57:36.254708  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254883  158374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:57:36.255420  158374 out.go:368] Setting JSON to false
	I1222 22:57:36.256374  158374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9596,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:57:36.256458  158374 start.go:143] virtualization: kvm guest
	I1222 22:57:36.258393  158374 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:57:36.259562  158374 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:57:36.259584  158374 notify.go:221] Checking for updates...
	I1222 22:57:36.261710  158374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:57:36.262944  158374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:57:36.264212  158374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:57:36.265355  158374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:57:36.266271  158374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:57:36.267661  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:36.267820  158374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:57:36.296187  158374 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:57:36.296285  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.350829  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.341376778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.350930  158374 docker.go:319] overlay module found
	I1222 22:57:36.352570  158374 out.go:179] * Using the docker driver based on existing profile
	I1222 22:57:36.353588  158374 start.go:309] selected driver: docker
	I1222 22:57:36.353611  158374 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.353719  158374 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:57:36.353830  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.406492  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.397760538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.407140  158374 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 22:57:36.407175  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:36.407232  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:36.407286  158374 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.408996  158374 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:57:36.410078  158374 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:57:36.411119  158374 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:57:36.412129  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:36.412159  158374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:57:36.412174  158374 cache.go:65] Caching tarball of preloaded images
	I1222 22:57:36.412242  158374 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:57:36.412248  158374 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:57:36.412244  158374 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:57:36.412341  158374 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:57:36.431941  158374 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:57:36.431955  158374 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:57:36.431969  158374 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:57:36.431996  158374 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:57:36.432059  158374 start.go:364] duration metric: took 40.356µs to acquireMachinesLock for "functional-384766"
	I1222 22:57:36.432072  158374 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:57:36.432076  158374 fix.go:54] fixHost starting: 
	I1222 22:57:36.432265  158374 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:57:36.449079  158374 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:57:36.449100  158374 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:57:36.450671  158374 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:57:36.450705  158374 machine.go:94] provisionDockerMachine start ...
	I1222 22:57:36.450764  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.467607  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.467835  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.467841  158374 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:57:36.608433  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.608449  158374 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:57:36.608504  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.626300  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.626509  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.626516  158374 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:57:36.777413  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.777486  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.795160  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.795380  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.795396  158374 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:57:36.935922  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:57:36.935942  158374 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:57:36.935957  158374 ubuntu.go:190] setting up certificates
	I1222 22:57:36.935965  158374 provision.go:84] configureAuth start
	I1222 22:57:36.936023  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:36.954219  158374 provision.go:143] copyHostCerts
	I1222 22:57:36.954277  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:57:36.954291  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:57:36.954367  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:57:36.954466  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:57:36.954469  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:57:36.954495  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:57:36.954569  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:57:36.954572  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:57:36.954631  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:57:36.954687  158374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
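The server certificate generated here must carry SANs for every name and address the Docker daemon will be reached by (the san=[...] list in the line above). A hedged way to confirm the SANs actually present on the emitted certificate:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'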
	I1222 22:57:36.981147  158374 provision.go:177] copyRemoteCerts
	I1222 22:57:36.981202  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:57:36.981239  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.000716  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.101499  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:57:37.118740  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:57:37.135018  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 22:57:37.151214  158374 provision.go:87] duration metric: took 215.234679ms to configureAuth
	I1222 22:57:37.151234  158374 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:57:37.151390  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:37.151430  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.168491  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.168730  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.168737  158374 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:57:37.310361  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:57:37.310376  158374 ubuntu.go:71] root file system type: overlay
	I1222 22:57:37.310489  158374 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:57:37.310547  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.329095  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.329306  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.329369  158374 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:57:37.478917  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:57:37.478994  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.496454  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.496687  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.496699  158374 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:57:37.641628  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
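Two idioms are at work in the unit install above: ExecStart= is first set empty to clear the command inherited from the base configuration before redefining it, and the new file is only swapped in (followed by daemon-reload, enable, and restart) when diff reports it differs from the deployed one. Outside minikube, the same ExecStart-clearing pattern is more commonly written as a drop-in that survives package upgrades; a minimal sketch, assuming a stock docker.service is installed:

	sudo mkdir -p /etc/systemd/system/docker.service.d
	cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
	[Service]
	# Clear the inherited ExecStart, then supply the replacement command.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker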
	I1222 22:57:37.641654  158374 machine.go:97] duration metric: took 1.190941144s to provisionDockerMachine
	I1222 22:57:37.641665  158374 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:57:37.641676  158374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:57:37.641727  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:57:37.641757  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.659069  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.759899  158374 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:57:37.763912  158374 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:57:37.763929  158374 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:57:37.763939  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:57:37.763985  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:57:37.764057  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:57:37.764125  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:57:37.764158  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:57:37.772288  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:37.789657  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:57:37.805946  158374 start.go:296] duration metric: took 164.267669ms for postStartSetup
	I1222 22:57:37.806019  158374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:57:37.806054  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.823397  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.920964  158374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:57:37.925567  158374 fix.go:56] duration metric: took 1.493483875s for fixHost
	I1222 22:57:37.925585  158374 start.go:83] releasing machines lock for "functional-384766", held for 1.493518865s
	I1222 22:57:37.925676  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:37.944340  158374 ssh_runner.go:195] Run: cat /version.json
	I1222 22:57:37.944379  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.944410  158374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:57:37.944475  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.962270  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.963480  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:38.111745  158374 ssh_runner.go:195] Run: systemctl --version
	I1222 22:57:38.118245  158374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 22:57:38.122628  158374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:57:38.122679  158374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:57:38.130349  158374 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
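The find invocation above, its shell quoting stripped by the logger, moves any bridge or podman CNI configs out of the way (suffixing .mk_disabled) so they cannot conflict with the CNI minikube manages. With the escaping restored it reads roughly:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;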
	I1222 22:57:38.130362  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.130390  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.130482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.143844  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:57:38.152204  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:57:38.160833  158374 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.160878  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:57:38.168944  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.176827  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:57:38.185035  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.193068  158374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:57:38.200733  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:57:38.208877  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:57:38.217062  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:57:38.225212  158374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:57:38.231954  158374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:57:38.238562  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.319900  158374 ssh_runner.go:195] Run: sudo systemctl restart containerd
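The sed batch above normalizes /etc/containerd/config.toml, most importantly pinning SystemdCgroup = false so containerd uses the cgroupfs driver detected on the host. The key toggle isolated, plus a hedged post-restart check that the CRI socket answers:

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl restart containerd
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version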
	I1222 22:57:38.394735  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.394777  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.394829  158374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:57:38.408181  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.420724  158374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:57:38.437862  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.450387  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:57:38.462197  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.475419  158374 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:57:38.478805  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:57:38.485878  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:57:38.497638  158374 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:57:38.579501  158374 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:57:38.662636  158374 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.662750  158374 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:57:38.675412  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:57:38.686668  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.767093  158374 ssh_runner.go:195] Run: sudo systemctl restart docker
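The 130-byte /etc/docker/daemon.json written above pins Docker itself to the cgroupfs driver; the exact payload is not shown in the log, so the following is only an illustrative sketch of its likely shape, verified with the same docker info probe minikube runs later:

	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs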
	I1222 22:57:39.452892  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:57:39.465276  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:57:39.477001  158374 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:57:39.491722  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.503501  158374 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:57:39.584904  158374 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:57:39.672762  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.748726  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:57:39.768653  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:57:39.780104  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.862790  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:57:39.934384  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.948030  158374 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:57:39.948084  158374 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:57:39.952002  158374 start.go:564] Will wait 60s for crictl version
	I1222 22:57:39.952049  158374 ssh_runner.go:195] Run: which crictl
	I1222 22:57:39.955397  158374 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:57:39.979213  158374 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:57:39.979270  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.004367  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
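At this point the runner confirms that the CRI endpoint and the engine agree (crictl reports RuntimeName docker, RuntimeVersion 29.1.3). The same two probes run by hand on the node (hedged; binary paths taken from this log):

	sudo /usr/local/bin/crictl version
	docker version --format '{{.Server.Version}}'   # 29.1.3 in this run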
	I1222 22:57:40.031792  158374 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:57:40.031863  158374 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:57:40.047933  158374 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:57:40.053698  158374 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 22:57:40.054726  158374 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:57:40.054846  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:40.054890  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.076020  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.076046  158374 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:57:40.076111  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.096347  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.096366  158374 cache_images.go:86] Images are preloaded, skipping loading
	I1222 22:57:40.096374  158374 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:57:40.096468  158374 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 22:57:40.096517  158374 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:57:40.147179  158374 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 22:57:40.147206  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:40.147226  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:40.147236  158374 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:57:40.147256  158374 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:57:40.147375  158374 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 22:57:40.147436  158374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:57:40.155394  158374 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:57:40.155439  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:57:40.163036  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:57:40.175169  158374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:57:40.187093  158374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
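The rendered kubeadm.yaml.new just uploaded drives every kubeadm phase below. As a hedged aside, recent kubeadm releases can sanity-check such a file before it is consumed (assuming the validate subcommand is present in the v1.35.0-rc.1 binary):

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new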
	I1222 22:57:40.198818  158374 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:57:40.202222  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:40.283126  158374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:57:40.747886  158374 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:57:40.747899  158374 certs.go:195] generating shared ca certs ...
	I1222 22:57:40.747914  158374 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:57:40.748072  158374 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:57:40.748113  158374 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:57:40.748119  158374 certs.go:257] generating profile certs ...
	I1222 22:57:40.748199  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:57:40.748236  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:57:40.748278  158374 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:57:40.748397  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:57:40.748423  158374 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:57:40.748429  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:57:40.748451  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:57:40.748470  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:57:40.748489  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:57:40.748525  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:40.749053  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:57:40.768237  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:57:40.787559  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:57:40.804276  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:57:40.820613  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:57:40.836790  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:57:40.852839  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:57:40.869050  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:57:40.885231  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:57:40.901347  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:57:40.917338  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:57:40.933332  158374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:57:40.944903  158374 ssh_runner.go:195] Run: openssl version
	I1222 22:57:40.950515  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.957071  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:57:40.963749  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.966999  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.967032  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:57:41.000342  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 22:57:41.007579  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.014450  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:57:41.021442  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024853  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024902  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.058138  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:57:41.065135  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.071858  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:57:41.078672  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082051  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082083  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.115012  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
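The test -L checks above verify OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0. A hedged sketch of how such a link is derived and checked (the b5213941 hash for minikubeCA comes from this run):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	ls -l "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run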
	I1222 22:57:41.122326  158374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:57:41.125872  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:57:41.158840  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:57:41.191689  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:57:41.224669  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:57:41.258802  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:57:41.292531  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
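Each of the openssl runs above uses -checkend 86400, which succeeds only if the certificate will still be valid 86400 seconds (24 hours) from now. A hedged sketch of the same check made explicit:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h; regenerate before restart"
	fi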
	I1222 22:57:41.327828  158374 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:41.327941  158374 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.347229  158374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:57:41.355058  158374 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:57:41.355067  158374 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:57:41.355102  158374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:57:41.362198  158374 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.362672  158374 kubeconfig.go:125] found "functional-384766" server: "https://192.168.49.2:8441"
	I1222 22:57:41.363809  158374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:57:41.371022  158374 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 22:43:13.034628184 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 22:57:40.197478933 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
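Drift detection here is just a unified diff between the kubeadm.yaml deployed at the last start and the freshly rendered kubeadm.yaml.new; any hunk (here, the admission-plugins override) forces a reconfigure. The check reduced to its essentials (hedged sketch):

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no drift, reuse running control plane" \
	  || echo "drift detected, reconfigure from kubeadm.yaml.new"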
	I1222 22:57:41.371029  158374 kubeadm.go:1161] stopping kube-system containers ...
	I1222 22:57:41.371066  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.389715  158374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 22:57:41.415695  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 22:57:41.423304  158374 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 22 22:47 /etc/kubernetes/scheduler.conf
	
	I1222 22:57:41.423364  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 22:57:41.430717  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 22:57:41.437811  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.437848  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 22:57:41.444879  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.452191  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.452233  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.459225  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 22:57:41.466383  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.466418  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 22:57:41.473427  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 22:57:41.480724  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.518575  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.974225  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.135961  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.183844  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
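Rather than a full kubeadm init, the restart path replays individual init phases against the existing data directories. The sequence used above, as it would be run by hand (binary path and config path from this log):

	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo /bin/bash -c "env PATH=/var/lib/minikube/binaries/v1.35.0-rc.1:\$PATH \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml"
	done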
	I1222 22:57:42.223254  158374 api_server.go:52] waiting for apiserver process to appear ...
	I1222 22:57:42.223318  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:42.723474  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 117 near-identical polls of "sudo pgrep -xnf kube-apiserver.*minikube.*" elided: repeated at ~500 ms intervals from 22:57:43 through 22:58:41, none of them finding an apiserver process ...]
	I1222 22:58:41.723695  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
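The loop above is minikube waiting for a kube-apiserver process to appear inside the node: the probe runs roughly twice a second, and pgrep exits non-zero as long as nothing matches, so the runner keeps retrying. A minimal Go sketch of such a wait loop, assuming a local exec.Command in place of minikube's SSH runner (waitForAPIServer is an invented name, not minikube's actual function):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches, so err == nil
		// means a kube-apiserver process now exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}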
	I1222 22:58:42.223527  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:42.242498  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.242530  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:42.242576  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:42.263682  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.263696  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:42.263747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:42.284235  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.284250  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:42.284330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:42.303204  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.303219  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:42.303263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:42.321387  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.321404  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:42.321461  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:42.340277  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.340290  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:42.340333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:42.359009  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.359025  158374 logs.go:284] No container was found matching "kindnet"
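Each check above filters docker ps -a on a name prefix: cri-dockerd names pod containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so name=k8s_kube-apiserver would match an apiserver container if one had ever been created; here every filter returns 0 containers. A hypothetical Go sketch of the same per-component probe (containerIDs is an invented helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers, running or exited, whose
// name carries the k8s_<component> prefix that cri-dockerd assigns.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}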
	I1222 22:58:42.359034  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:42.359044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:42.407304  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:42.407323  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:42.423167  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:42.423184  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:42.478018  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:58:42.478032  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:42.478050  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:42.508140  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:42.508159  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
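Every describe-nodes attempt in this window fails the same way: dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the apiserver port at all (8441 is the port the profile's kubeconfig points at). A standalone reachability probe that would reproduce the symptom, assuming the apiserver's standard /livez endpoint; this is a sketch, not part of the test harness:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert; skip verification since
		// this is only a reachability probe, not an authenticated API call.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8441/livez")
	if err != nil {
		// Matches the log: "connect: connection refused" means no listener.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /livez:", resp.Status)
}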
	I1222 22:58:45.047948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:45.058851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:45.078438  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.078457  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:45.078506  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:45.096664  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.096678  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:45.096729  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:45.114982  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.114995  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:45.115033  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:45.132907  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.132920  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:45.132960  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:45.151352  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.151368  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:45.151409  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:45.169708  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.169725  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:45.169767  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:45.187775  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.187790  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:45.187802  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:45.187814  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:45.242776  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:45.242790  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:45.242800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:45.273873  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:45.273892  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:45.303522  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:45.303541  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:45.351682  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:45.351702  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:47.869586  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:47.880760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:47.899543  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.899560  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:47.899617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:47.917954  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.917970  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:47.918017  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:47.936207  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.936224  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:47.936269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:47.954310  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.954328  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:47.954376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:47.971746  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.971762  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:47.971806  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:47.989993  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.990008  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:47.990054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:48.008188  158374 logs.go:282] 0 containers: []
	W1222 22:58:48.008204  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:48.008215  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:48.008227  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:48.056174  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:48.056192  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:48.071128  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:48.071143  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:48.124584  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:48.124621  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:48.124635  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:48.155889  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:48.155907  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:50.685742  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:50.696961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:50.716371  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.716385  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:50.716430  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:50.734780  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.734798  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:50.734842  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:50.753152  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.753169  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:50.753213  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:50.771281  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.771296  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:50.771338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:50.788814  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.788826  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:50.788872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:50.806768  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.806781  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:50.806837  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:50.824539  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.824552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:50.824561  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:50.824581  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:50.873346  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:50.873363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:50.888174  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:50.888188  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:50.942890  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:50.942904  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:50.942915  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:50.971205  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:50.971223  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:53.500770  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:53.512538  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:53.535794  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.535812  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:53.535872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:53.554667  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.554684  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:53.554739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:53.573251  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.573267  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:53.573317  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:53.591664  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.591686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:53.591739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:53.610128  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.610141  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:53.610183  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:53.628089  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.628105  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:53.628148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:53.645890  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.645908  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:53.645919  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:53.645932  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:53.692043  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:53.692062  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:53.707092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:53.707107  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:53.761308  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:53.761320  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:53.761331  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:53.789713  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:53.789730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:56.318819  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:56.329790  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:56.348795  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.348808  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:56.348851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:56.366850  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.366866  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:56.366932  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:56.385468  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.385483  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:56.385530  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:56.404330  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.404345  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:56.404406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:56.422533  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.422549  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:56.422631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:56.440667  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.440681  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:56.440742  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:56.459073  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.459088  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:56.459099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:56.459113  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:56.506766  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:56.506783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:56.523645  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:56.523667  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:56.580517  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:56.580531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:56.580543  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:56.610571  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:56.610588  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.140001  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:59.151059  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:59.169787  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.169801  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:59.169840  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:59.187907  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.187919  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:59.187959  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:59.206755  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.206770  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:59.206811  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:59.225123  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.225139  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:59.225179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:59.243400  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.243414  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:59.243453  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:59.261475  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.261492  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:59.261556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:59.279819  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.279834  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:59.279844  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:59.279855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:59.295024  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:59.295046  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:59.349874  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:59.343012   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.343571   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345099   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345548   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.347033   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:59.343012   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.343571   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345099   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345548   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.347033   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:59.349889  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:59.349902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:59.381356  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:59.381378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.409144  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:59.409160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:01.955340  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:01.966166  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:01.985075  158374 logs.go:282] 0 containers: []
	W1222 22:59:01.985088  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:01.985135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:02.003681  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.003695  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:02.003748  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:02.022064  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.022081  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:02.022127  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:02.040290  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.040302  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:02.040346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:02.058109  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.058123  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:02.058167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:02.076398  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.076415  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:02.076469  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:02.095264  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.095326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:02.095338  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:02.095350  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:02.140655  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:02.140678  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:02.156234  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:02.156248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:02.212079  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:02.205182   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.205716   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207280   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207782   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.209314   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:02.205182   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.205716   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207280   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207782   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.209314   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:02.212094  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:02.212106  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:02.241399  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:02.241415  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:04.771709  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:04.783605  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:04.802797  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.802811  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:04.802907  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:04.822172  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.822187  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:04.822232  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:04.840265  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.840280  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:04.840320  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:04.858270  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.858287  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:04.858329  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:04.876142  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.876158  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:04.876204  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:04.894156  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.894169  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:04.894209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:04.912355  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.912373  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:04.912383  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:04.912393  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:04.940312  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:04.940332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:04.985353  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:04.985370  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:05.000242  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:05.000264  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:05.054276  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:05.047325   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.047883   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049424   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049887   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.051401   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:05.047325   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.047883   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049424   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049887   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.051401   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:05.054288  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:05.054298  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:07.583327  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:07.594487  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:07.614008  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.614023  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:07.614073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:07.633345  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.633364  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:07.633410  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:07.651888  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.651900  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:07.651939  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:07.670373  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.670389  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:07.670431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:07.687752  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.687772  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:07.687819  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:07.707382  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.707397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:07.707449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:07.725692  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.725705  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:07.725714  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:07.725724  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:07.741276  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:07.741290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:07.807688  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:07.800646   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.801223   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.802769   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.803177   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.804708   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
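The connection refused on [::1]:8441 is consistent with the empty k8s_kube-apiserver scan above: the pinned kubectl under /var/lib/minikube/binaries/v1.35.0-rc.1/ is pointed, via the node-local /var/lib/minikube/kubeconfig, at the --apiserver-port chosen at start, but nothing is listening there yet. A hedged diagnostic sketch (not part of the minikube run) to confirm the port is actually closed before suspecting the kubeconfig:

        sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
        curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"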
	I1222 22:59:07.807698  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:07.807708  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:07.838193  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:07.838211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
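The backtick substitution in the container-status command above resolves crictl's full path when it is installed; otherwise it expands to the bare word crictl, that sudo invocation fails, and the || falls through to plain docker ps -a. An equivalent, slightly clearer spelling of the same fallback:

        # crictl if available and working, otherwise docker:
        { command -v crictl >/dev/null 2>&1 && sudo "$(command -v crictl)" ps -a; } || sudo docker ps -a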
	I1222 22:59:07.867411  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:07.867429  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.417278  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:10.428172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:10.447192  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.447210  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:10.447268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:10.465742  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.465755  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:10.465802  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:10.483930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.483943  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:10.483982  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:10.502550  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.502564  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:10.502631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:10.521157  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.521170  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:10.521217  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:10.539930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.539944  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:10.539988  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:10.557819  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.557836  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:10.557847  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:10.557860  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:10.586007  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:10.586023  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.630906  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:10.630928  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:10.645969  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:10.645986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:10.700369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:10.693704   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.694182   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.695738   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.696170   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.697667   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:10.700383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:10.700396  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.229999  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:13.241245  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:13.261612  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.261629  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:13.261685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:13.279825  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.279843  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:13.279893  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:13.297933  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.297951  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:13.298008  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:13.316218  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.316235  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:13.316315  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:13.334375  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.334389  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:13.334444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:13.353104  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.353123  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:13.353179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:13.371772  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.371791  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:13.371802  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:13.371816  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:13.419777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:13.419800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:13.435473  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:13.435489  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:13.490824  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:13.484010   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.484560   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486103   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486541   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.488011   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:13.490835  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:13.490848  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.519782  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:13.519800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.052715  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:16.064085  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:16.083176  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.083195  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:16.083255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:16.102468  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.102485  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:16.102532  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:16.121564  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.121580  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:16.121654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:16.140862  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.140879  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:16.140928  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:16.159281  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.159295  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:16.159347  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:16.177569  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.177606  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:16.177659  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:16.196491  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.196507  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:16.196516  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:16.196526  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.225379  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:16.225399  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:16.270312  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:16.270332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:16.285737  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:16.285752  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:16.339892  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:16.332955   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.333507   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335037   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335481   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.337003   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:16.339906  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:16.339924  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:18.870402  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:18.881333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:18.899917  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.899940  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:18.899987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:18.918652  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.918666  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:18.918711  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:18.936854  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.936871  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:18.936930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:18.956082  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.956099  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:18.956148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:18.974672  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.974690  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:18.974747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:18.993264  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.993281  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:18.993330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:19.013308  158374 logs.go:282] 0 containers: []
	W1222 22:59:19.013325  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:19.013335  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:19.013346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:19.063311  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:19.063330  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:19.078990  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:19.079012  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:19.135746  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:19.127970   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.128563   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130146   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130556   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.132525   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:19.135757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:19.135778  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:19.165331  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:19.165348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:21.694471  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:21.705412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:21.724588  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.724617  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:21.724663  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:21.744659  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.744677  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:21.744732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:21.762841  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.762858  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:21.762913  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:21.782008  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.782023  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:21.782064  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:21.801013  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.801031  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:21.801077  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:21.817861  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.817879  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:21.817936  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:21.836076  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.836093  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:21.836104  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:21.836115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:21.884827  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:21.884849  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:21.900053  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:21.900069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:21.955238  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:21.947967   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.948481   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950007   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950444   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.952014   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:21.955248  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:21.955258  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:21.984138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:21.984157  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.515104  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:24.526883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:24.546166  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.546180  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:24.546228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:24.565305  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.565319  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:24.565361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:24.584559  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.584572  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:24.584631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:24.604650  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.604664  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:24.604712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:24.623346  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.623362  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:24.623412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:24.642324  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.642343  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:24.642406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:24.661990  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.662004  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:24.662013  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:24.662024  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:24.677840  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:24.677855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:24.734271  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:24.726969   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.727498   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729051   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729473   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.731041   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:24.734289  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:24.734304  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:24.764562  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:24.764580  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.793099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:24.793115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.340497  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:27.351904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:27.372400  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.372419  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:27.372472  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:27.392295  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.392312  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:27.392363  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:27.411771  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.411784  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:27.411828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:27.430497  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.430512  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:27.430558  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:27.449983  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.449999  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:27.450044  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:27.469696  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.469714  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:27.469771  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:27.488685  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.488702  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:27.488715  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:27.488730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:27.517546  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:27.517564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.564530  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:27.564554  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:27.579944  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:27.579963  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:27.636369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:27.629189   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.629734   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631292   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631750   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.633229   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:27.636383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:27.636394  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.168117  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:30.179633  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:30.199078  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.199094  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:30.199144  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:30.218504  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.218517  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:30.218559  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:30.237792  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.237810  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:30.237858  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:30.257058  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.257073  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:30.257118  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:30.277405  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.277422  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:30.277475  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:30.297453  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.297467  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:30.297515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:30.316894  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.316915  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:30.316924  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:30.316936  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.346684  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:30.346705  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:30.376362  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:30.376378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:30.422918  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:30.422940  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:30.438917  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:30.438935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:30.494621  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:32.995681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:33.006896  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:33.026274  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.026292  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:33.026336  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:33.045071  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.045087  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:33.045134  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:33.064583  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.064611  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:33.064660  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:33.085351  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.085374  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:33.085431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:33.103978  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.103991  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:33.104045  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:33.123168  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.123186  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:33.123241  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:33.143080  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.143095  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:33.143105  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:33.143116  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:33.197825  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:33.190806   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.191305   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.192895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.193360   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.194895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:33.197836  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:33.197850  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:33.226457  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:33.226476  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:33.257519  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:33.257546  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:33.309950  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:33.309971  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:35.827217  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:35.838617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:35.858342  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.858358  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:35.858412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:35.877344  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.877362  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:35.877416  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:35.897833  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.897848  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:35.897902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:35.916409  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.916428  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:35.916485  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:35.935688  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.935705  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:35.935766  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:35.954858  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.954876  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:35.954924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:35.973729  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.973746  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:35.973757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:35.973767  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:36.002045  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:36.002069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:36.029933  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:36.029949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:36.075963  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:36.075988  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:36.091711  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:36.091734  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:36.147521  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:36.140402   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.141008   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142529   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142962   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.144445   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:36.140402   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.141008   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142529   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142962   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.144445   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
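
Each block from "sudo pgrep -xnf kube-apiserver.*minikube.*" down to the next "** /stderr **" is one iteration of the same apiserver wait loop, re-run every two to three seconds. A sketch of that shape under stated assumptions; the interval and timeout are illustrative, not minikube's actual values, and sshRun is the same hypothetical helper as above:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func sshRun(cmd string) (string, error) { return "", errors.New("no such process") } // stub

    // waitForAPIServerProcess polls for a running kube-apiserver process on
    // the node, giving up at the deadline.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := sshRun(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
                return nil // apiserver process found
            }
            time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence in the log
        }
        return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(10 * time.Second))
    }

In the log above the condition never becomes true, so the loop keeps cycling through the same log-gathering diagnostics between attempts.
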
	I1222 22:59:38.649172  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:38.660310  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:38.679380  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.679396  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:38.679449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:38.698305  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.698318  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:38.698365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:38.717524  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.717541  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:38.717601  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:38.736808  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.736822  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:38.736874  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:38.756003  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.756017  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:38.756061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:38.774845  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.774858  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:38.774901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:38.793240  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.793257  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:38.793269  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:38.793281  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:38.821390  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:38.821407  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:38.868649  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:38.868671  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:38.884729  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:38.884749  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:38.940189  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:38.933255   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.933828   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935320   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935747   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.937211   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:38.933255   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.933828   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935320   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935747   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.937211   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:38.940200  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:38.940211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
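
The "docker ps -a --filter=name=k8s_... --format={{.ID}}" runs check, one component at a time, whether kubelet has created any control-plane containers: with cri-dockerd, pod containers are named with a "k8s_" prefix followed by the container name, so an empty ID list ("0 containers: []") means the component was never started. A sketch of the same sweep, assuming the docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same component list probed in the log above.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%-24s %d containers %v err=%v\n", c, len(ids), ids, err)
        }
    }
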
	I1222 22:59:41.470854  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:41.481957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:41.501032  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.501051  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:41.501102  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:41.522720  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.522740  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:41.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:41.544756  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.544769  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:41.544812  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:41.564773  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.564789  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:41.565312  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:41.586087  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.586104  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:41.586156  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:41.604141  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.604156  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:41.604206  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:41.623828  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.623846  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:41.623858  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:41.623870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:41.652778  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:41.652798  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:41.680995  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:41.681014  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:41.728777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:41.728800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:41.744897  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:41.744916  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:41.800644  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:41.793494   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.794046   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.795564   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.796035   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.797584   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:41.793494   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.794046   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.795564   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.796035   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.797584   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
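
Every kubectl attempt fails the same way: the kubeconfig points at https://localhost:8441, and the dial to [::1]:8441 is refused, meaning nothing is listening on that port at all, which is consistent with the "No container was found matching \"kube-apiserver\"" lines. A quick probe of the same condition:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" here matches the kubectl errors in the log.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }
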
	I1222 22:59:44.302472  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:44.313688  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:44.333253  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.333267  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:44.333313  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:44.352778  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.352793  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:44.352851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:44.372079  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.372093  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:44.372135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:44.390683  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.390701  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:44.390761  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:44.409168  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.409185  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:44.409259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:44.426368  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.426381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:44.426426  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:44.444108  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.444124  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:44.444138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:44.444148  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:44.481663  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:44.481679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:44.529101  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:44.529121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:44.546062  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:44.546081  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:44.600660  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:44.593909   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.594437   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.595959   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.596345   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.597904   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:44.593909   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.594437   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.595959   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.596345   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.597904   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:44.600672  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:44.600684  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.129588  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:47.140641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:47.159435  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.159453  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:47.159498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:47.178540  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.178560  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:47.178634  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:47.198365  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.198383  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:47.198438  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:47.217411  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.217429  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:47.217479  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:47.236273  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.236287  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:47.236330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:47.255917  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.255930  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:47.255973  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:47.274750  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.274768  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:47.274779  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:47.274792  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:47.322428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:47.322452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:47.339666  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:47.339691  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:47.396552  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:47.389135   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.389730   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391303   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391736   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.393254   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:47.389135   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.389730   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391303   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391736   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.393254   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:47.396562  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:47.396574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.425768  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:47.425785  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
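
The container-status command is a small shell fallback chain: `which crictl || echo crictl` substitutes the resolved crictl path if it exists (or the bare word crictl if not, so the command line stays well-formed), and the outer `|| sudo docker ps -a` runs only if the crictl invocation fails. The same fallback expressed directly in Go (sudo omitted in this sketch):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl and falls back to docker, mirroring
    // sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("docker", "ps", "-a").Output() // fallback
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(err, out)
    }
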
	I1222 22:59:49.955844  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:49.966834  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:49.985390  158374 logs.go:282] 0 containers: []
	W1222 22:59:49.985405  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:49.985446  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:50.003669  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.003687  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:50.003735  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:50.023188  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.023203  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:50.023254  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:50.042292  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.042309  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:50.042360  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:50.060457  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.060471  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:50.060516  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:50.078548  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.078565  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:50.078666  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:50.096685  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.096704  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:50.096717  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:50.096730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:50.125658  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:50.125680  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:50.173107  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:50.173124  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:50.188136  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:50.188152  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:50.242225  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:50.235426   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.235931   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237475   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237960   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.239487   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:50.235426   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.235931   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237475   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237960   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.239487   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:50.242236  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:50.242246  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:52.771712  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:52.783330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:52.802157  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.802171  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:52.802219  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:52.820709  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.820726  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:52.820777  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:52.839433  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.839448  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:52.839515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:52.857834  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.857849  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:52.857903  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:52.875916  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.875933  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:52.875977  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:52.893339  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.893351  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:52.893394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:52.911298  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.911311  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:52.911319  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:52.911329  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:52.942377  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:52.942392  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:52.969572  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:52.969587  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:53.014323  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:53.014339  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:53.029751  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:53.029764  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:53.085527  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:53.078407   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.078987   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.080619   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.081049   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.082721   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:53.078407   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.078987   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.080619   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.081049   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.082721   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
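
"describe nodes" is run with the version-pinned kubectl that minikube installs on the node (/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl) against the node-local kubeconfig (/var/lib/minikube/kubeconfig), so the check does not depend on the host's kubectl or KUBECONFIG. A sketch of the same invocation, assuming it runs on the node; both paths are taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Shell out to the node's pinned kubectl with the node-local kubeconfig.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        fmt.Println(err)
        fmt.Println(string(out))
    }
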
	I1222 22:59:55.587247  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:55.598436  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:55.617688  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.617704  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:55.617764  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:55.637510  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.637528  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:55.637585  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:55.656117  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.656132  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:55.656187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:55.675258  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.675278  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:55.675327  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:55.694537  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.694555  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:55.694627  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:55.711993  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.712011  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:55.712056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:55.730198  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.730216  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:55.730228  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:55.730242  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:55.795390  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:55.795416  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:55.811790  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:55.811809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:55.867201  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:55.859754   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.860414   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862051   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862474   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.863985   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:55.859754   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.860414   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862051   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862474   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.863985   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:55.867213  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:55.867224  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:55.898358  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:55.898381  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.428962  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:58.440024  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:58.459773  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.459787  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:58.459828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:58.478843  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.478863  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:58.478920  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:58.498503  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.498518  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:58.498563  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:58.518032  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.518052  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:58.518110  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:58.537315  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.537330  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:58.537388  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:58.556299  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.556319  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:58.556368  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:58.575345  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.575359  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:58.575369  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:58.575378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.603490  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:58.603508  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:58.651589  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:58.651620  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:58.667341  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:58.667358  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:58.723840  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:58.716532   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.717054   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.718649   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.719098   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.720749   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:58.716532   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.717054   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.718649   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.719098   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.720749   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:59:58.723855  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:58.723865  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
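
The Docker log gathering tails both the docker and cri-docker journald units in one call: -u may be repeated to merge units, and -n caps the output at the last 400 lines. The equivalent invocation from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Merge the docker and cri-docker units, last 400 lines.
        out, err := exec.Command("sudo", "journalctl",
            "-u", "docker", "-u", "cri-docker", "-n", "400").CombinedOutput()
        fmt.Println(err)
        fmt.Println(string(out))
    }
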
	I1222 23:00:01.257052  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:01.268153  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:01.287939  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.287954  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:01.288001  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:01.306844  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.306857  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:01.306904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:01.326511  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.326530  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:01.326579  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:01.345734  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.345748  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:01.345793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:01.364619  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.364634  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:01.364682  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:01.383578  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.383605  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:01.383654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:01.401753  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.401770  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:01.401781  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:01.401795  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:01.457583  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:01.450373   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.450906   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452946   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.454493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:01.450373   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.450906   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452946   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.454493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:01.457611  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:01.457625  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.486870  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:01.486891  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:01.514587  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:01.514619  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:01.561028  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:01.561052  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
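
Each line's prefix follows the klog header format: a severity letter (I/W/E), MMDD date, wall-clock time with microseconds, PID, then source file and line. The E-level memcache.go lines above are emitted by client-go's discovery cache through the same machinery. A minimal producer of that format, assuming k8s.io/klog/v2 is available as a dependency:

    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        // Emits lines shaped like those in this report, e.g.
        // I1222 23:00:04.345650   24445 main.go:15] ...
        klog.InitFlags(nil)
        flag.Set("logtostderr", "true")
        flag.Parse()
        klog.Infof("Gathering logs for %s ...", "dmesg")
        klog.Warningf("No container was found matching %q", "kindnet")
        klog.Flush()
    }
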
	I1222 23:00:04.078615  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:04.089843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:04.109432  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.109450  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:04.109498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:04.128585  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.128630  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:04.128680  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:04.147830  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.147846  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:04.147901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:04.166672  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.166686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:04.166730  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:04.185500  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.185523  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:04.185574  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:04.204345  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.204360  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:04.204404  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:04.222488  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.222503  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:04.222513  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:04.222523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:04.252225  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:04.252244  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:04.280489  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:04.280507  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:04.329635  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:04.329657  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:04.345631  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:04.345650  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:04.400851  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:04.393656   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.394230   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.395887   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.396281   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.397838   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:04.393656   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.394230   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.395887   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.396281   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.397838   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:06.901498  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:06.913084  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:06.932724  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.932739  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:06.932793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:06.951127  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.951146  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:06.951187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:06.969488  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.969501  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:06.969543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:06.987763  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.987780  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:06.987824  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:07.005884  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.005900  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:07.005951  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:07.026370  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.026397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:07.026449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:07.047472  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.047486  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:07.047496  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:07.047505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:07.092662  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:07.092679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:07.107657  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:07.107672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:07.162182  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:07.155104   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.155706   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157237   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157737   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.159269   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:07.155104   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.155706   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157237   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157737   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.159269   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:07.162193  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:07.162203  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:07.190466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:07.190482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
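	Between probes, the same four gathering commands run each cycle: kubelet and Docker/cri-docker unit logs via journalctl, recent kernel warnings via dmesg, and container status via crictl with a docker fallback. They can be replayed by hand on the node; the flags below are taken verbatim from the log, and only the output redirection is an addition:

	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo journalctl -u docker -u cri-docker -n 400 > docker.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a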
	I1222 23:00:09.719767  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:09.730961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:09.750004  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.750021  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:09.750061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:09.768191  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.768203  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:09.768240  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:09.785655  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.785668  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:09.785715  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:09.803931  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.803946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:09.803987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:09.823040  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.823058  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:09.823105  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:09.841359  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.841373  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:09.841413  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:09.859786  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.859799  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:09.859812  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:09.859824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:09.905428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:09.905445  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:09.920496  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:09.920511  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:09.974948  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:09.968124   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.968667   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970182   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970583   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.972119   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:09.968124   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.968667   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970182   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970583   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.972119   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:09.974969  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:09.974982  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:10.003466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:10.003485  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.535644  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:12.546867  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:12.565761  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.565778  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:12.565825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:12.584431  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.584446  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:12.584504  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:12.602950  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.602966  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:12.603009  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:12.621210  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.621224  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:12.621268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:12.639377  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.639393  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:12.639444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:12.657924  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.657941  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:12.657984  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:12.676311  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.676326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:12.676336  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:12.676346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.703500  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:12.703515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:12.750933  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:12.750951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:12.766856  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:12.766870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:12.822138  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:12.815289   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.815808   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817339   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817801   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.819283   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:12.815289   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.815808   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817339   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817801   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.819283   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:12.822170  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:12.822269  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.355685  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:15.366722  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:15.385319  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.385334  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:15.385401  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:15.402653  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.402666  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:15.402712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:15.420695  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.420709  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:15.420757  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:15.438422  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.438438  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:15.438488  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:15.457961  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.457978  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:15.458023  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:15.477016  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.477031  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:15.477075  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:15.495320  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.495335  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:15.495346  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:15.495363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:15.542697  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:15.542716  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:15.557986  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:15.558002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:15.613071  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:15.605742   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.606273   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.607863   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.608322   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.609898   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:15.605742   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.606273   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.607863   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.608322   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.609898   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:15.613082  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:15.613093  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.643893  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:15.643912  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:18.176478  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:18.187435  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:18.206820  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.206836  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:18.206885  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:18.225162  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.225179  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:18.225242  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:18.244089  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.244106  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:18.244149  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:18.263582  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.263618  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:18.263678  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:18.285421  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.285439  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:18.285483  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:18.304575  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.304616  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:18.304679  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:18.322814  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.322831  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:18.322842  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:18.322853  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:18.367678  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:18.367695  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:18.384038  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:18.384060  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:18.439158  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:18.432401   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.432947   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434455   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434840   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.436266   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:18.432401   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.432947   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434455   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434840   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.436266   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:18.439172  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:18.439186  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:18.468274  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:18.468290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:20.996786  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:21.007676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:21.026577  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.026589  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:21.026662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:21.045179  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.045195  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:21.045237  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:21.064216  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.064230  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:21.064278  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:21.082929  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.082946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:21.082991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:21.101298  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.101314  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:21.101372  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:21.119708  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.119719  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:21.119759  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:21.137828  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.137841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:21.137849  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:21.137859  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:21.167198  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:21.167214  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:21.194956  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:21.194974  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:21.243666  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:21.243687  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:21.259092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:21.259108  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:21.316128  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:21.309163   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.309711   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311266   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311721   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.313183   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:21.309163   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.309711   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311266   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311721   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.313183   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:23.817830  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:23.829010  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:23.847819  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.847833  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:23.847883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:23.866626  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.866640  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:23.866685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:23.884038  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.884053  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:23.884099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:23.903021  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.903037  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:23.903091  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:23.921758  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.921771  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:23.921817  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:23.940118  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.940135  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:23.940176  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:23.958805  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.958817  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:23.958826  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:23.958836  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:24.006524  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:24.006542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:24.021579  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:24.021602  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:24.077965  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:24.077976  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:24.077986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:24.107448  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:24.107464  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:26.635419  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:26.646546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:26.665787  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.665805  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:26.665856  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:26.683869  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.683885  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:26.683930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:26.702549  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.702565  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:26.702628  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:26.720884  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.720901  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:26.720947  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:26.739437  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.739453  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:26.739498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:26.757871  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.757885  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:26.757927  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:26.775863  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.775882  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:26.775893  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:26.775902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:26.821886  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:26.821903  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:26.837204  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:26.837220  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:26.891970  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:26.884764   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.885231   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.886895   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.887281   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.888844   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:26.884764   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.885231   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.886895   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.887281   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.888844   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:26.891981  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:26.891991  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:26.922932  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:26.922949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:29.452400  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:29.463551  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:29.482265  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.482278  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:29.482326  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:29.501689  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.501707  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:29.501762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:29.522730  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.522747  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:29.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:29.542657  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.542671  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:29.542720  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:29.560883  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.560897  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:29.560938  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:29.579281  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.579297  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:29.579340  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:29.597740  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.597755  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:29.597766  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:29.597777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:29.627231  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:29.627248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:29.655168  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:29.655183  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:29.703330  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:29.703348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:29.718800  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:29.718821  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:29.773515  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:29.766735   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.767220   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.768799   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.769197   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.770715   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:29.766735   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.767220   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.768799   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.769197   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.770715   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
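	Every "describe nodes" attempt above fails identically: with no kube-apiserver container running, nothing listens on port 8441, so the client's API-group discovery (memcache.go:265) gets connection refused on [::1]:8441. Two quick checks that would confirm this from the node (an assumed diagnostic, not part of the test flow):

	    # No LISTEN entry expected on 8441 while the retries above keep failing.
	    sudo ss -ltn 'sport = :8441'
	    # A raw probe of the apiserver health endpoint; expect "connection refused".
	    curl -sk https://localhost:8441/livez; echo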
	I1222 23:00:32.274411  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:32.285356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:32.304409  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.304423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:32.304465  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:32.324167  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.324183  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:32.324228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:32.342878  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.342893  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:32.342950  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:32.362212  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.362226  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:32.362268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:32.381154  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.381171  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:32.381229  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:32.400512  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.400533  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:32.400587  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:32.419083  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.419097  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:32.419112  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:32.419121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:32.466805  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:32.466824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:32.482931  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:32.482947  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:32.543407  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:32.536205   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.536750   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538320   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538796   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.540418   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:32.543419  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:32.543436  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:32.572975  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:32.572990  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
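	Each poll performs the same per-component lookup with docker ps name filters. Condensed into one loop (a sketch equivalent to the commands above, not the harness's actual Go code):
	
	    # Sketch of the per-component container check repeated in every poll above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	      [ -n "$ids" ] || echo "No container was found matching \"${c}\""
	    done
	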
	I1222 23:00:35.102503  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:35.113391  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:35.132097  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.132109  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:35.132151  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:35.150359  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.150378  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:35.150441  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:35.169070  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.169088  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:35.169141  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:35.187626  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.187641  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:35.187686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:35.205837  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.205854  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:35.205895  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:35.224198  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.224213  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:35.224255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:35.241750  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.241765  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:35.241774  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:35.241783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:35.286130  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:35.286145  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:35.301096  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:35.301111  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:35.356973  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:35.349818   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.350427   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.351993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.352503   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.353993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:35.356986  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:35.356997  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:35.385504  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:35.385523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
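	The container-status probe prefers crictl and falls back to docker. Unrolled, the one-liner above is approximately:
	
	    # Approximate expansion of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    if sudo "$(which crictl || echo crictl)" ps -a; then
	      :   # crictl (or a literal 'crictl' found on PATH) succeeded
	    else
	      sudo docker ps -a   # fallback when crictl is missing or errors
	    fi
	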
	I1222 23:00:37.914534  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:37.925443  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:37.943928  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.943942  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:37.943990  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:37.961408  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.961424  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:37.961481  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:37.979364  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.979380  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:37.979437  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:37.997737  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.997751  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:37.997796  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:38.016341  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.016358  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:38.016425  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:38.035203  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.035221  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:38.035270  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:38.053655  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.053672  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:38.053684  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:38.053699  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:38.100003  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:38.100022  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:38.116642  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:38.116661  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:38.170897  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:38.164046   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.164628   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166176   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166562   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.168042   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:38.170907  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:38.170921  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:38.200254  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:38.200273  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:40.729556  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:40.740911  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:40.762071  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.762085  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:40.762131  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:40.782191  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.782207  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:40.782259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:40.802303  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.802318  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:40.802365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:40.821101  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.821115  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:40.821159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:40.839813  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.839830  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:40.839880  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:40.859473  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.859490  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:40.859546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:40.877060  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.877076  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:40.877088  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:40.877101  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:40.922835  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:40.922852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:40.938346  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:40.938361  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:40.993518  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:40.986693   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.987159   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.988687   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.989134   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.990683   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:40.993531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:40.993542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:41.023093  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:41.023109  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:43.552889  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:43.564108  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:43.582897  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.582914  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:43.582969  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:43.601736  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.601750  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:43.601793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:43.620069  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.620083  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:43.620126  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:43.638251  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.638269  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:43.638335  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:43.656816  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.656829  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:43.656878  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:43.675289  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.675302  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:43.675354  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:43.693789  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.693805  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:43.693817  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:43.693828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:43.749062  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:43.742036   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.742643   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.744184   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.744673   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.746198   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:43.749078  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:43.749091  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:43.779321  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:43.779346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:43.808611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:43.808632  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:43.853216  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:43.853235  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:46.369073  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:46.380061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:46.398781  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.398797  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:46.398851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:46.416817  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.416834  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:46.416877  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:46.434863  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.434877  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:46.434923  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:46.453147  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.453164  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:46.453208  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:46.471210  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.471224  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:46.471272  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:46.489455  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.489468  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:46.489517  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:46.508022  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.508039  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:46.508050  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:46.508061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:46.555488  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:46.555505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:46.571399  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:46.571415  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:46.626809  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:46.620219   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.620751   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.622257   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.622660   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.624126   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:46.626822  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:46.626834  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:46.656631  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:46.656648  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:49.197674  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:49.208617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:49.228203  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.228221  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:49.228267  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:49.246623  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.246638  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:49.246676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:49.264794  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.264810  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:49.264861  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:49.283414  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.283431  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:49.283480  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:49.301735  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.301748  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:49.301787  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:49.320079  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.320092  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:49.320134  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:49.339280  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.339296  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:49.339308  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:49.339324  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:49.354937  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:49.354953  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:49.409733  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:49.402775   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.403250   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.404843   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.405306   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.406844   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:49.409744  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:49.409753  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:49.438748  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:49.438765  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:49.466948  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:49.466964  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:52.015822  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:52.027799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:52.047753  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.047770  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:52.047825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:52.067486  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.067502  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:52.067557  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:52.086094  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.086110  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:52.086158  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:52.105497  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.105513  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:52.105560  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:52.123560  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.123575  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:52.123641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:52.141887  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.141905  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:52.141957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:52.160464  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.160480  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:52.160491  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:52.160500  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:52.207605  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:52.207623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:52.222700  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:52.222714  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:52.277875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:52.270341   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.270915   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.272985   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.273548   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.275027   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:52.277887  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:52.277899  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:52.307146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:52.307163  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:54.836681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:54.847631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:54.866945  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.866961  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:54.867018  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:54.885478  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.885491  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:54.885540  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:54.904643  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.904657  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:54.904701  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:54.923539  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.923554  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:54.923615  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:54.941322  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.941338  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:54.941399  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:54.961766  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.961785  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:54.961839  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:54.981417  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.981432  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:54.981442  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:54.981452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:55.012286  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:55.012306  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:55.043682  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:55.043703  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:55.091444  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:55.091466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:55.107015  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:55.107039  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:55.162701  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:55.155754   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.156265   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.157880   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.158349   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.159815   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
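	The timestamps show the whole cycle repeating roughly every 2.5 seconds until the wait deadline expires. A bash re-creation of the wait condition (hypothetical; the real loop lives in minikube's Go code, per the ssh_runner.go/logs.go call sites above) is simply:
	
	    # Hypothetical sketch of the apiserver wait loop driving the polls above.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 2.5   # matches the ~2.5 s gap between poll timestamps
	      # ...each failed iteration re-gathers kubelet/dmesg/describe-nodes/Docker/container-status logs
	    done
	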
	I1222 23:00:57.664308  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:57.675489  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:57.692858  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.692875  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:57.692935  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:57.712522  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.712538  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:57.712607  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:57.732210  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.732226  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:57.732269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:57.751532  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.751545  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:57.751602  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:57.772243  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.772257  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:57.772301  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:57.791227  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.791243  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:57.791304  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:57.810525  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.810543  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:57.810552  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:57.810561  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:57.858495  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:57.858513  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:57.873762  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:57.873777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:57.929650  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:57.922350   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.922945   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924490   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924968   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.926535   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:57.929662  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:57.929672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:57.960293  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:57.960310  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.491408  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:00.502843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:00.522074  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.522090  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:00.522138  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:00.540871  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.540888  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:00.540945  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:00.558913  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.558931  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:00.558975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:00.577980  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.577997  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:00.578050  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:00.597037  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.597056  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:00.597104  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:00.615867  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.615881  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:00.615924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:00.634566  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.634581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:00.634609  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:00.634623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.663403  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:00.663425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:00.712341  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:00.712364  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:00.729099  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:00.729121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:00.785283  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:00.778009   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.778541   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780082   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780546   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.782086   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:00.785294  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:00.785307  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
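
[Editor's note] The repeated "connection refused" stderr above means nothing is accepting TCP connections on localhost:8441, the apiserver port this test configures via --apiserver-port=8441. The failure can be reproduced independently of kubectl with a plain dial probe; the address and port come from the log, the program itself is only a sketch.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl was trying: localhost:8441.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Expected while the apiserver is down: "connect: connection refused".
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8441 is accepting connections")
}
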
	I1222 23:01:03.318166  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:03.329041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:03.347864  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.347878  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:03.347940  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:03.366921  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.366937  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:03.366991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:03.384054  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.384070  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:03.384117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:03.403365  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.403380  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:03.403432  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:03.422487  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.422501  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:03.422556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:03.440736  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.440751  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:03.440805  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:03.459891  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.459906  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:03.459915  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:03.459926  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:03.508624  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:03.508646  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:03.525926  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:03.525946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:03.581788  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:03.574395   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.574955   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.576542   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.577046   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.578534   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:03.581798  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:03.581809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:03.610547  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:03.610564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
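
[Editor's note] The "container status" gather above uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl when available and otherwise list everything with docker. A small Go approximation of that fallback follows; it checks PATH rather than re-running the exact shell one-liner, and the function name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when installed, falling back to docker,
// approximating the shell one-liner in the log above.
func containerStatus() (string, error) {
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}
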
	I1222 23:01:06.140960  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:06.152054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:06.171277  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.171290  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:06.171346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:06.190005  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.190024  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:06.190073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:06.209033  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.209052  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:06.209119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:06.228351  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.228368  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:06.228424  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:06.248668  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.248682  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:06.248737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:06.269569  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.269587  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:06.269662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:06.291826  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.291841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:06.291851  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:06.291861  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:06.337818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:06.337839  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:06.353395  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:06.353413  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:06.410648  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:06.403406   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.403979   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.405629   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.406063   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.407632   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:06.410666  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:06.410681  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:06.440427  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:06.440447  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:08.971014  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:08.982172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:09.001296  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.001315  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:09.001377  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:09.021010  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.021025  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:09.021065  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:09.039299  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.039315  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:09.039361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:09.059055  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.059069  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:09.059119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:09.079065  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.079080  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:09.079123  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:09.098129  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.098148  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:09.098194  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:09.116949  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.116965  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:09.116974  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:09.116984  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:09.172761  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:09.165842   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.166366   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.167907   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.168380   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.169881   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:09.172771  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:09.172782  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:09.203094  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:09.203115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:09.231372  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:09.231389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:09.279710  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:09.279730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
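
[Editor's note] On every failed sweep, four log sources are gathered over SSH: kubelet and Docker via journalctl, the kernel ring buffer via dmesg, and node state via kubectl describe nodes. The commands below are copied verbatim from the "Gathering logs for ..." lines; the map and runner wrapping them are illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// Log sources gathered on each failed sweep; command strings are verbatim
// from the log above. This runner is a sketch, not minikube's own code.
var gathers = map[string]string{
	"kubelet":        `sudo journalctl -u kubelet -n 400`,
	"dmesg":          `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"Docker":         `sudo journalctl -u docker -u cri-docker -n 400`,
	"describe nodes": `sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
}

func main() {
	for name, cmd := range gathers {
		fmt.Println("Gathering logs for", name, "...")
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gather %s failed: %v\n", name, err)
		}
		_ = out // inspect or persist the gathered output as needed
	}
}
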
	I1222 23:01:11.797972  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:11.809159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:11.828406  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.828423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:11.828474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:11.847173  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.847194  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:11.847248  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:11.866123  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.866141  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:11.866191  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:11.885374  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.885388  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:11.885428  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:11.904372  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.904386  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:11.904429  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:11.923420  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.923437  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:11.923496  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:11.942294  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.942332  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:11.942344  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:11.942356  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:11.999449  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:11.992239   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.992816   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994388   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994771   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.996307   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:11.999465  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:11.999478  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:12.029498  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:12.029524  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:12.058708  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:12.058726  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:12.106818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:12.106837  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:14.624265  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:14.635338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:14.654466  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.654481  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:14.654523  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:14.674801  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.674817  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:14.674860  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:14.694258  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.694275  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:14.694322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:14.714916  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.714932  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:14.714980  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:14.734141  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.734155  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:14.734198  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:14.754093  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.754108  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:14.754162  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:14.773451  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.773468  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:14.773481  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:14.773496  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:14.830750  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:14.823428   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.824043   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.825689   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.826129   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.827703   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:14.830760  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:14.830770  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:14.859787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:14.859804  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:14.888611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:14.888631  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:14.936097  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:14.936118  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.453191  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:17.464732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:17.485010  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.485025  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:17.485072  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:17.505952  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.505969  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:17.506027  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:17.528761  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.528776  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:17.528821  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:17.549296  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.549312  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:17.549376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:17.568100  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.568117  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:17.568167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:17.587017  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.587034  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:17.587086  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:17.606020  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.606036  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:17.606045  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:17.606061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.621414  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:17.621430  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:17.677909  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:17.670771   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.671262   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.672895   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.673340   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.674869   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:17.677925  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:17.677935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:17.708117  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:17.708138  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:17.739554  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:17.739574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:20.288948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:20.299810  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:20.318929  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.318945  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:20.319006  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:20.337556  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.337573  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:20.337641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:20.355705  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.355718  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:20.355760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:20.373672  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.373686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:20.373726  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:20.392616  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.392631  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:20.392674  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:20.411253  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.411270  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:20.411322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:20.429537  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.429552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:20.429563  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:20.429575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:20.445080  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:20.445098  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:20.501506  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:20.494080   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.494697   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496376   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496836   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.498365   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:20.501520  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:20.501535  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:20.531907  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:20.531925  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:20.560530  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:20.560547  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.108846  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:23.119974  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:23.138613  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.138631  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:23.138686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:23.156921  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.156935  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:23.156975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:23.175153  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.175166  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:23.175209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:23.193263  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.193295  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:23.193356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:23.212207  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.212221  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:23.212263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:23.230929  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.230945  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:23.231005  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:23.249646  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.249659  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:23.249669  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:23.249679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:23.277729  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:23.277745  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.324063  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:23.324082  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:23.339406  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:23.339425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:23.394875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:23.387312   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.387859   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.389847   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.390576   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.392042   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:23.394885  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:23.394895  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:25.926075  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:25.937168  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:25.956016  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.956028  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:25.956074  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:25.974143  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.974159  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:25.974202  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:25.992434  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.992449  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:25.992505  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:26.010399  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.010415  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:26.010474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:26.029958  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.029974  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:26.030039  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:26.048962  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.048977  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:26.049032  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:26.067563  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.067581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:26.067605  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:26.067627  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:26.115800  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:26.115818  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:26.131143  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:26.131160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:26.187038  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:26.179580   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.180759   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.181222   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.182757   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.183141   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:26.187049  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:26.187059  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:26.217787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:26.217807  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
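The block above is one pass of minikube's control-plane wait loop: every couple of seconds it probes for a running kube-apiserver process, lists each expected control-plane container by name, and, finding none, re-gathers the kubelet, dmesg, describe-nodes, Docker, and container-status logs. The repeated "connection refused" on [::1]:8441 only means nothing is listening on the apiserver port yet. A minimal manual reproduction of the same probe, assuming the docker driver and SSH access to the node (the profile name is a placeholder):

    # process probe and container listing, exactly as the loop runs them
    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube ssh -p <profile> -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'

Both commands are taken from the Run: lines above; an empty result from each is what produces the `0 containers: []` / `No container was found` pairs that repeat through the rest of this log.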
	I1222 23:01:28.746733  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:28.757902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:28.777575  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.777608  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:28.777664  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:28.798343  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.798363  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:28.798420  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:28.816843  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.816859  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:28.816904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:28.835441  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.835458  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:28.835507  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:28.855127  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.855139  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:28.855195  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:28.873368  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.873381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:28.873422  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:28.890543  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.890556  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:28.890565  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:28.890575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.918533  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:28.918553  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:28.963368  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:28.963389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:28.978933  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:28.978951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:29.035020  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:29.035033  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:29.035044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.565580  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:31.576645  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:31.596094  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.596110  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:31.596173  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:31.615329  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.615345  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:31.615394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:31.634047  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.634065  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:31.634117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:31.653553  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.653567  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:31.653626  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:31.671338  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.671354  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:31.671411  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:31.689466  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.689482  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:31.689536  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:31.707731  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.707744  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:31.707753  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:31.707761  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:31.759802  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:31.759828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:31.776809  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:31.776828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:31.833983  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:31.833996  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:31.834010  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.862984  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:31.863002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.392435  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:34.403543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:34.422799  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.422814  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:34.422857  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:34.442014  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.442029  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:34.442076  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:34.460995  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.461007  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:34.461056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:34.479671  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.479688  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:34.479737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:34.498097  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.498113  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:34.498169  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:34.515924  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.515939  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:34.515985  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:34.534433  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.534447  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:34.534455  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:34.534466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:34.589495  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:34.589505  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:34.589515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:34.619333  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:34.619352  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.647369  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:34.647385  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:34.692228  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:34.692249  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.207836  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:37.218829  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:37.237894  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.237908  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:37.237953  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:37.256000  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.256012  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:37.256054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:37.275097  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.275112  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:37.275161  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:37.294044  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.294059  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:37.294099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:37.312979  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.312995  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:37.313041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:37.331662  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.331675  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:37.331718  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:37.350197  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.350212  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:37.350223  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:37.350233  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:37.397328  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:37.397345  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.412835  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:37.412852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:37.468475  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:37.468487  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:37.468498  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:37.497915  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:37.497934  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.027516  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:40.039728  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:40.058705  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.058719  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:40.058762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:40.076167  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.076184  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:40.076231  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:40.095983  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.095996  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:40.096037  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:40.114657  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.114670  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:40.114717  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:40.133015  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.133028  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:40.133070  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:40.152127  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.152140  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:40.152187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:40.169569  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.169583  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:40.169622  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:40.169636  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:40.184978  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:40.184992  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:40.238860  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:40.238870  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:40.238879  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:40.268146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:40.268165  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.295931  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:40.295946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:42.843708  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:42.854816  158374 kubeadm.go:602] duration metric: took 4m1.499731906s to restartPrimaryControlPlane
	W1222 23:01:42.854901  158374 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 23:01:42.854978  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:01:43.257733  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:01:43.270528  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:01:43.278400  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:01:43.278439  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:01:43.285901  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:01:43.285910  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:01:43.285947  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:01:43.293882  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:01:43.293919  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:01:43.300825  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:01:43.307941  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:01:43.307983  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:01:43.314784  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.321699  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:01:43.321728  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.328397  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:01:43.335272  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:01:43.335301  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
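This is minikube's stale-kubeconfig cleanup: for each of the four kubeconfig files it greps for the expected control-plane endpoint and removes any file that does not reference it. Because the preceding `kubeadm reset` already deleted the files, every grep exits with status 2 and each `rm -f` is a no-op. Sketched as a single loop, with the endpoint and file names taken from the log above:

    # hedged sketch of the per-file check-and-remove step
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done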
	I1222 23:01:43.341949  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:01:43.376034  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:01:43.376102  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:01:43.445165  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:01:43.445236  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:01:43.445264  158374 kubeadm.go:319] OS: Linux
	I1222 23:01:43.445301  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:01:43.445350  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:01:43.445392  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:01:43.445455  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:01:43.445494  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:01:43.445551  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:01:43.445588  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:01:43.445673  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:01:43.445736  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:01:43.500219  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:01:43.500396  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:01:43.500507  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:01:43.513368  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:01:43.515533  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:01:43.515634  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:01:43.515681  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:01:43.515765  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:01:43.515820  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:01:43.515882  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:01:43.515924  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:01:43.515975  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:01:43.516024  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:01:43.516083  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:01:43.516151  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:01:43.516181  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:01:43.516264  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:01:43.648060  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:01:43.775539  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:01:43.806099  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:01:43.912171  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:01:44.004004  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:01:44.004296  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:01:44.006366  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:01:44.008239  158374 out.go:252]   - Booting up control plane ...
	I1222 23:01:44.008308  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:01:44.008365  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:01:44.009041  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:01:44.026974  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:01:44.027088  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:01:44.033700  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:01:44.033947  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:01:44.034017  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:01:44.136722  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:01:44.136861  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:05:44.137086  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000408574s
	I1222 23:05:44.137129  158374 kubeadm.go:319] 
	I1222 23:05:44.137190  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:05:44.137216  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:05:44.137303  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:05:44.137309  158374 kubeadm.go:319] 
	I1222 23:05:44.137438  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:05:44.137492  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:05:44.137528  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:05:44.137531  158374 kubeadm.go:319] 
	I1222 23:05:44.140373  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.140752  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.140849  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:05:44.141147  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:05:44.141156  158374 kubeadm.go:319] 
	I1222 23:05:44.141230  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
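The failure is in kubeadm's wait-control-plane phase: kubeadm polls the kubelet's local health endpoint for up to 4m0s and gives up when it never answers. The probe it performs, and the triage commands it suggests, can be run directly on the node:

    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
    systemctl status kubelet
    journalctl -xeu kubelet -n 100

Here the kubelet never became healthy, so the static pod manifests for the apiserver, controller-manager, scheduler, and etcd were never launched, which is why every container probe in this log comes back empty.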
	W1222 23:05:44.141360  158374 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000408574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
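Of the three preflight warnings, the cgroups one is the most plausible root cause for the unhealthy kubelet: the node runs kernel 6.8.0-1045-gcp with cgroup v1, and per the warning, kubelet v1.35 and newer requires the `FailCgroupV1` configuration option to be explicitly set to `false` for cgroup v1 support (see the KEP link in the warning). A hedged sketch of checking the cgroup mode and applying the override the warning describes, using the config path the log shows kubeadm writing; the YAML field name is assumed to follow the usual KubeletConfiguration camelCase convention:

    stat -fc %T /sys/fs/cgroup                               # "cgroup2fs" = v2, "tmpfs" = cgroup v1
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet

Note that minikube regenerates the kubelet config on each start attempt, so a manual append like this would only survive within one attempt; it is a diagnostic sketch, not a fix.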
	
	I1222 23:05:44.141451  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:05:44.555201  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:05:44.567871  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:05:44.567915  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:05:44.575883  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:05:44.575897  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:05:44.575941  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:05:44.583486  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:05:44.583527  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:05:44.590649  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:05:44.597769  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:05:44.597806  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:05:44.604798  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.611986  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:05:44.612034  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.619193  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:05:44.626515  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:05:44.626555  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:05:44.633629  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:05:44.735033  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.735554  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.792296  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:09:45.398895  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:09:45.398970  158374 kubeadm.go:319] 
	I1222 23:09:45.399090  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:09:45.401586  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:09:45.401671  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:09:45.401745  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:09:45.401791  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:09:45.401820  158374 kubeadm.go:319] OS: Linux
	I1222 23:09:45.401885  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:09:45.401955  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:09:45.402023  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:09:45.402088  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:09:45.402152  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:09:45.402201  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:09:45.402235  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:09:45.402274  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:09:45.402309  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:09:45.402367  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:09:45.402449  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:09:45.402536  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:09:45.402585  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:09:45.404239  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:09:45.404310  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:09:45.404360  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:09:45.404421  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:09:45.404472  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:09:45.404530  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:09:45.404569  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:09:45.404650  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:09:45.404705  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:09:45.404761  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:09:45.404827  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:09:45.404867  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:09:45.404910  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:09:45.404948  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:09:45.404989  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:09:45.405029  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:09:45.405075  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:09:45.405115  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:09:45.405181  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:09:45.405240  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:09:45.406503  158374 out.go:252]   - Booting up control plane ...
	I1222 23:09:45.406585  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:09:45.406677  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:09:45.406738  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:09:45.406832  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:09:45.406905  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:09:45.406993  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:09:45.407062  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:09:45.407092  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:09:45.407211  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:09:45.407300  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:09:45.407348  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000888774s
	I1222 23:09:45.407350  158374 kubeadm.go:319] 
	I1222 23:09:45.407409  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:09:45.407435  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:09:45.407521  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:09:45.407524  158374 kubeadm.go:319] 
	I1222 23:09:45.407628  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:09:45.407652  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:09:45.407675  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:09:45.407697  158374 kubeadm.go:319] 
	I1222 23:09:45.407753  158374 kubeadm.go:403] duration metric: took 12m4.079935698s to StartCluster
	I1222 23:09:45.407873  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:09:45.407938  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:09:45.445003  158374 cri.go:96] found id: ""
	I1222 23:09:45.445021  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.445027  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:09:45.445038  158374 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:09:45.445084  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:09:45.470772  158374 cri.go:96] found id: ""
	I1222 23:09:45.470788  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.470794  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:09:45.470799  158374 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:09:45.470845  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:09:45.495903  158374 cri.go:96] found id: ""
	I1222 23:09:45.495920  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.495927  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:09:45.495933  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:09:45.495983  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:09:45.523926  158374 cri.go:96] found id: ""
	I1222 23:09:45.523943  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.523952  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:09:45.523960  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:09:45.524021  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:09:45.551137  158374 cri.go:96] found id: ""
	I1222 23:09:45.551153  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.551164  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:09:45.551171  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:09:45.551226  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:09:45.576565  158374 cri.go:96] found id: ""
	I1222 23:09:45.576583  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.576611  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:09:45.576621  158374 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:09:45.576676  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:09:45.601969  158374 cri.go:96] found id: ""
	I1222 23:09:45.601983  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.601991  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:09:45.602003  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:09:45.602018  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:09:45.650169  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:09:45.650193  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:09:45.665853  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:09:45.665870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:09:45.722796  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:09:45.722812  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:09:45.722824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:09:45.751752  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:09:45.751775  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 23:09:45.780185  158374 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:09:45.780232  158374 out.go:285] * 
	W1222 23:09:45.780327  158374 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.780349  158374 out.go:285] * 
	W1222 23:09:45.780644  158374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:09:45.784145  158374 out.go:203] 
	W1222 23:09:45.785152  158374 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.785198  158374 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:09:45.785226  158374 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:09:45.786470  158374 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.415844375Z" level=info msg="Loading containers: done."
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:59.349925   39141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:59.350425   39141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:59.352004   39141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:59.352399   39141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:59.353921   39141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:09:59 up  2:52,  0 user,  load average: 0.76, 0.25, 0.37
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:09:56 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:56 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 335.
	Dec 22 23:09:56 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:56 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:56 functional-384766 kubelet[38849]: E1222 23:09:56.806118   38849 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:56 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:56 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:57 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 336.
	Dec 22 23:09:57 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:57 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:57 functional-384766 kubelet[38920]: E1222 23:09:57.531705   38920 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:57 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:57 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:58 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 337.
	Dec 22 23:09:58 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:58 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:58 functional-384766 kubelet[38974]: E1222 23:09:58.254579   38974 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:58 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:58 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:58 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 338.
	Dec 22 23:09:58 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:58 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:59 functional-384766 kubelet[39021]: E1222 23:09:59.036008   39021 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:59 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:59 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
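
The dump above pins the failure: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never answers on 8441. Below is a minimal sketch, not taken from this run, for confirming the hierarchy and for the opt-out the kubeadm warning itself names (the KubeletConfiguration option FailCgroupV1, failCgroupV1 in YAML); the patch-file name follows kubeadm's <target>+<patchtype> convention and is illustrative:

    # Check which cgroup hierarchy the host (or kic container) is on:
    # "cgroup2fs" means cgroup v2; "tmpfs" means the deprecated v1 seen here.
    stat -fc %T /sys/fs/cgroup/

    # The log shows minikube already drives kubeadm's [patches] phase
    # ("Applied patch ... to target \"kubeletconfiguration\""), so a
    # strategic-merge patch is one place such an override could live:
    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'failCgroupV1: false' > kubeletconfiguration+strategic.yaml

Note that minikube's own suggestion (--extra-config=kubelet.cgroup-driver=systemd) targets a cgroup-driver mismatch; given the kubelet's validation error above, the failCgroupV1 override appears to be the relevant knob on a cgroup v1 host.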
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (331.492357ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (2.71s)
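
For context on the command under test: minikube status renders through a Go template, so several component fields can be queried in one call. A hedged sketch against the profile from this report, using the same field names the test exercises above:

    out/minikube-linux-amd64 status -p functional-384766 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
    # In the state captured here this should report the host Running but the
    # kubelet and apiserver Stopped, and exit non-zero (exit status 2 above).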

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-384766 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1641: (dbg) Non-zero exit: kubectl --context functional-384766 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server: exit status 1 (54.333128ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1643: failed to create hello-node deployment with this command "kubectl --context functional-384766 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server": exit status 1.
functional_test.go:1613: service test failed - dumping debug information
functional_test.go:1614: -----------------------service failure post-mortem--------------------------------
functional_test.go:1617: (dbg) Run:  kubectl --context functional-384766 describe po hello-node-connect
functional_test.go:1617: (dbg) Non-zero exit: kubectl --context functional-384766 describe po hello-node-connect: exit status 1 (59.924328ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1619: "kubectl --context functional-384766 describe po hello-node-connect" failed: exit status 1
functional_test.go:1621: hello-node pod describe:
functional_test.go:1623: (dbg) Run:  kubectl --context functional-384766 logs -l app=hello-node-connect
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-384766 logs -l app=hello-node-connect: exit status 1 (54.623224ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1625: "kubectl --context functional-384766 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1627: hello-node logs:
functional_test.go:1629: (dbg) Run:  kubectl --context functional-384766 describe svc hello-node-connect
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-384766 describe svc hello-node-connect: exit status 1 (56.022212ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1631: "kubectl --context functional-384766 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1633: hello-node svc describe:
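
The post-mortem above replays cleanly once the apiserver answers. A hedged sketch of the same sequence the test drives, using the context and image from this run:

    # Create the deployment exactly as functional_test.go:1641 does:
    kubectl --context functional-384766 create deployment hello-node-connect \
      --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    # Wait for it, then gather the same debug views the post-mortem wanted:
    kubectl --context functional-384766 rollout status deployment/hello-node-connect
    kubectl --context functional-384766 describe po -l app=hello-node-connect
    kubectl --context functional-384766 logs -l app=hello-node-connect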
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
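
The inspect output shows the apiserver's 8441/tcp published on 127.0.0.1:32786. A minimal sketch for extracting that mapping and probing the apiserver directly, indexing NetworkSettings.Ports as shown above:

    PORT=$(docker inspect \
      -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' \
      functional-384766)
    # -k because minikube's apiserver certificate is signed by its own CA.
    curl -k "https://127.0.0.1:${PORT}/healthz"
    # With the kubelet crash-looping as logged above, expect "connection
    # refused" here rather than an "ok" body.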
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (305.121173ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-580825 service hello-node --url                                                                                                                   │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                      │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                      │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                                                      │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ ssh            │ functional-384766 ssh sudo cat /usr/share/ca-certificates/75803.pem                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ config         │ functional-384766 config get cpus                                                                                                                            │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ cp             │ functional-384766 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh -n functional-384766 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh sudo cat /etc/ssl/certs/758032.pem                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ cp             │ functional-384766 cp functional-384766:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3180693308/001/cp-test.txt │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh sudo cat /usr/share/ca-certificates/758032.pem                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh -n functional-384766 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ cp             │ functional-384766 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh echo hello                                                                                                                             │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh -n functional-384766 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ tunnel         │ functional-384766 tunnel --alsologtostderr                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ tunnel         │ functional-384766 tunnel --alsologtostderr                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ addons         │ functional-384766 addons list                                                                                                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ tunnel         │ functional-384766 tunnel --alsologtostderr                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ addons         │ functional-384766 addons list -o json                                                                                                                        │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ service        │ functional-384766 service list                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ service        │ functional-384766 service list -o json                                                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
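	
	The table above is minikube's persisted command audit for this host. Assuming the default layout (the path is not shown in this log), the same records live as JSON events under $MINIKUBE_HOME/logs and can be filtered per profile:
	
	    # Sketch only: the audit.json location and field names are assumed from the table columns above.
	    $ jq -r 'select(.data.profile == "functional-384766") | [.data.startTime, .data.command, .data.args] | @tsv' \
	        "$HOME/.minikube/logs/audit.json" | tail -n 5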
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:57:36
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
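	Each line below follows the klog format declared above: a severity letter (I/W/E/F), then mmdd, the timestamp, the PID, and the emitting source location. To pull only warnings and errors out of a saved copy of this log (the filename here is hypothetical):
	
	    $ grep -E '^[WEF][0-9]{4} ' last-start.log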
	I1222 22:57:36.254392  158374 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:57:36.254700  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254705  158374 out.go:374] Setting ErrFile to fd 2...
	I1222 22:57:36.254708  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254883  158374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:57:36.255420  158374 out.go:368] Setting JSON to false
	I1222 22:57:36.256374  158374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9596,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:57:36.256458  158374 start.go:143] virtualization: kvm guest
	I1222 22:57:36.258393  158374 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:57:36.259562  158374 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:57:36.259584  158374 notify.go:221] Checking for updates...
	I1222 22:57:36.261710  158374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:57:36.262944  158374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:57:36.264212  158374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:57:36.265355  158374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:57:36.266271  158374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:57:36.267661  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:36.267820  158374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:57:36.296187  158374 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:57:36.296285  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.350829  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.341376778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.350930  158374 docker.go:319] overlay module found
	I1222 22:57:36.352570  158374 out.go:179] * Using the docker driver based on existing profile
	I1222 22:57:36.353588  158374 start.go:309] selected driver: docker
	I1222 22:57:36.353611  158374 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.353719  158374 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:57:36.353830  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.406492  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.397760538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.407140  158374 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 22:57:36.407175  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:36.407232  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:36.407286  158374 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.408996  158374 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:57:36.410078  158374 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:57:36.411119  158374 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:57:36.412129  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:36.412159  158374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:57:36.412174  158374 cache.go:65] Caching tarball of preloaded images
	I1222 22:57:36.412242  158374 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:57:36.412248  158374 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:57:36.412244  158374 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:57:36.412341  158374 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:57:36.431941  158374 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:57:36.431955  158374 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:57:36.431969  158374 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:57:36.431996  158374 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:57:36.432059  158374 start.go:364] duration metric: took 40.356µs to acquireMachinesLock for "functional-384766"
	I1222 22:57:36.432072  158374 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:57:36.432076  158374 fix.go:54] fixHost starting: 
	I1222 22:57:36.432265  158374 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:57:36.449079  158374 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:57:36.449100  158374 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:57:36.450671  158374 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:57:36.450705  158374 machine.go:94] provisionDockerMachine start ...
	I1222 22:57:36.450764  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.467607  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.467835  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.467841  158374 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:57:36.608433  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.608449  158374 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:57:36.608504  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.626300  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.626509  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.626516  158374 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:57:36.777413  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.777486  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.795160  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.795380  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.795396  158374 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:57:36.935922  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
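	The script that just ran is minikube's idempotent /etc/hosts patch: the outer grep -xq checks for an existing whole-line hostname entry, and the 127.0.1.1 line is rewritten in place when present and appended otherwise, so repeated starts never stack duplicate entries. It can be re-checked from the host with:
	
	    $ minikube -p functional-384766 ssh -- grep -n functional-384766 /etc/hosts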
	I1222 22:57:36.935942  158374 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:57:36.935957  158374 ubuntu.go:190] setting up certificates
	I1222 22:57:36.935965  158374 provision.go:84] configureAuth start
	I1222 22:57:36.936023  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:36.954219  158374 provision.go:143] copyHostCerts
	I1222 22:57:36.954277  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:57:36.954291  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:57:36.954367  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:57:36.954466  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:57:36.954469  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:57:36.954495  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:57:36.954569  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:57:36.954572  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:57:36.954631  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:57:36.954687  158374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:57:36.981147  158374 provision.go:177] copyRemoteCerts
	I1222 22:57:36.981202  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:57:36.981239  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.000716  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.101499  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:57:37.118740  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:57:37.135018  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 22:57:37.151214  158374 provision.go:87] duration metric: took 215.234679ms to configureAuth
	I1222 22:57:37.151234  158374 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:57:37.151390  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:37.151430  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.168491  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.168730  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.168737  158374 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:57:37.310361  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:57:37.310376  158374 ubuntu.go:71] root file system type: overlay
	I1222 22:57:37.310489  158374 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:57:37.310547  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.329095  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.329306  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.329369  158374 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:57:37.478917  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
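	
	The rendered unit is staged as docker.service.new rather than written in place; the empty ExecStart= directive is the standard systemd idiom for clearing an inherited start command before setting a new one, exactly as the embedded comments explain. Once installed, the effective unit can be inspected from the host:
	
	    $ minikube -p functional-384766 ssh -- sudo systemctl cat docker.service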
	
	I1222 22:57:37.478994  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.496454  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.496687  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.496699  158374 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:57:37.641628  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
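	The command above doubles as a change guard: diff -u exits non-zero only when the staged unit differs from the installed one, so the mv/daemon-reload/restart branch fires only when something actually changed. The same pattern in isolation (file and unit names hypothetical):
	
	    $ sudo diff -u /etc/myapp.conf /tmp/myapp.conf.new \
	        || { sudo mv /tmp/myapp.conf.new /etc/myapp.conf && sudo systemctl restart myapp; }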
	I1222 22:57:37.641654  158374 machine.go:97] duration metric: took 1.190941144s to provisionDockerMachine
	I1222 22:57:37.641665  158374 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:57:37.641676  158374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:57:37.641727  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:57:37.641757  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.659069  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.759899  158374 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:57:37.763912  158374 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:57:37.763929  158374 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:57:37.763939  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:57:37.763985  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:57:37.764057  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:57:37.764125  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:57:37.764158  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:57:37.772288  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:37.789657  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:57:37.805946  158374 start.go:296] duration metric: took 164.267669ms for postStartSetup
	I1222 22:57:37.806019  158374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:57:37.806054  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.823397  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.920964  158374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:57:37.925567  158374 fix.go:56] duration metric: took 1.493483875s for fixHost
	I1222 22:57:37.925585  158374 start.go:83] releasing machines lock for "functional-384766", held for 1.493518865s
	I1222 22:57:37.925676  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:37.944340  158374 ssh_runner.go:195] Run: cat /version.json
	I1222 22:57:37.944379  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.944410  158374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:57:37.944475  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.962270  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.963480  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:38.111745  158374 ssh_runner.go:195] Run: systemctl --version
	I1222 22:57:38.118245  158374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 22:57:38.122628  158374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:57:38.122679  158374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:57:38.130349  158374 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:57:38.130362  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.130390  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.130482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.143844  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:57:38.152204  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:57:38.160833  158374 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.160878  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:57:38.168944  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.176827  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:57:38.185035  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.193068  158374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:57:38.200733  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:57:38.208877  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:57:38.217062  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:57:38.225212  158374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:57:38.231954  158374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:57:38.238562  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.319900  158374 ssh_runner.go:195] Run: sudo systemctl restart containerd
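	The sed edits above normalize /etc/containerd/config.toml before the restart: SystemdCgroup is forced to false to match the "cgroupfs" driver detected on the host, the legacy runtime.v1/runc.v1 shims are rewritten to io.containerd.runc.v2, and unprivileged ports are re-enabled. A quick spot-check from the host:
	
	    $ minikube -p functional-384766 ssh -- sudo grep -n SystemdCgroup /etc/containerd/config.toml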
	I1222 22:57:38.394735  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.394777  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.394829  158374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:57:38.408181  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.420724  158374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:57:38.437862  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.450387  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:57:38.462197  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.475419  158374 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:57:38.478805  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:57:38.485878  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:57:38.497638  158374 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:57:38.579501  158374 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:57:38.662636  158374 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.662750  158374 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:57:38.675412  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:57:38.686668  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.767093  158374 ssh_runner.go:195] Run: sudo systemctl restart docker
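	The 130-byte /etc/docker/daemon.json written above is not echoed into the log; given the surrounding "configuring docker to use cgroupfs" message, it presumably pins the cgroup driver, along the lines of the assumed shape below. What the daemon actually reports is easy to confirm:
	
	    $ minikube -p functional-384766 ssh -- sudo cat /etc/docker/daemon.json
	    {"exec-opts":["native.cgroupdriver=cgroupfs"]}   # assumed content, not shown in the log
	    $ minikube -p functional-384766 ssh -- docker info --format '{{.CgroupDriver}}'
	    cgroupfs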
	I1222 22:57:39.452892  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:57:39.465276  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:57:39.477001  158374 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:57:39.491722  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.503501  158374 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:57:39.584904  158374 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:57:39.672762  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.748726  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:57:39.768653  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:57:39.780104  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.862790  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:57:39.934384  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.948030  158374 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:57:39.948084  158374 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:57:39.952002  158374 start.go:564] Will wait 60s for crictl version
	I1222 22:57:39.952049  158374 ssh_runner.go:195] Run: which crictl
	I1222 22:57:39.955397  158374 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:57:39.979213  158374 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
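	crictl resolves the runtime through the /etc/crictl.yaml written a few lines earlier, which points it at the cri-dockerd socket; the same endpoint can be queried explicitly:
	
	    $ minikube -p functional-384766 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a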
	I1222 22:57:39.979270  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.004367  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.031792  158374 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:57:40.031863  158374 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:57:40.047933  158374 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:57:40.053698  158374 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 22:57:40.054726  158374 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:57:40.054846  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:40.054890  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.076020  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.076046  158374 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:57:40.076111  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.096347  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
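	The image list appears twice because it is taken by two separate checks: once while deciding whether the preload tarball needs extracting (docker.go) and once before image loading (cache_images.go); matching output is what lets both steps short-circuit. The equivalent view from the host:
	
	    $ minikube -p functional-384766 image ls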
	I1222 22:57:40.096366  158374 cache_images.go:86] Images are preloaded, skipping loading
	I1222 22:57:40.096374  158374 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:57:40.096468  158374 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
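	The kubelet unit fragment above is installed as the 10-kubeadm.conf drop-in a few lines below; as with docker.service, the empty ExecStart= clears the distribution default so the flags listed here fully define the kubelet command line. The merged result can be viewed with:
	
	    $ minikube -p functional-384766 ssh -- sudo systemctl cat kubelet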
	I1222 22:57:40.096517  158374 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:57:40.147179  158374 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 22:57:40.147206  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:40.147226  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:40.147236  158374 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:57:40.147256  158374 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:57:40.147375  158374 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
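	
	The config above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). On recent kubeadm releases the rendered file can be sanity-checked offline; the final file name here is assumed:
	
	    $ sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml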
	
	I1222 22:57:40.147436  158374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:57:40.155394  158374 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:57:40.155439  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:57:40.163036  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:57:40.175169  158374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:57:40.187093  158374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1222 22:57:40.198818  158374 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:57:40.202222  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:40.283126  158374 ssh_runner.go:195] Run: sudo systemctl start kubelet
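	With the unit and drop-in in place, kubelet is (re)started via daemon-reload + start; whether it actually stays up is worth checking independently of minikube's own readiness loops:
	
	    $ minikube -p functional-384766 ssh -- sudo systemctl is-active kubelet
	    $ minikube -p functional-384766 ssh -- sudo journalctl -u kubelet -n 20 --no-pager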
	I1222 22:57:40.747886  158374 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:57:40.747899  158374 certs.go:195] generating shared ca certs ...
	I1222 22:57:40.747914  158374 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:57:40.748072  158374 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:57:40.748113  158374 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:57:40.748119  158374 certs.go:257] generating profile certs ...
	I1222 22:57:40.748199  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:57:40.748236  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:57:40.748278  158374 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:57:40.748397  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:57:40.748423  158374 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:57:40.748429  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:57:40.748451  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:57:40.748470  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:57:40.748489  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:57:40.748525  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:40.749053  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:57:40.768237  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:57:40.787559  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:57:40.804276  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:57:40.820613  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:57:40.836790  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:57:40.852839  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:57:40.869050  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:57:40.885231  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:57:40.901347  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:57:40.917338  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:57:40.933332  158374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:57:40.944903  158374 ssh_runner.go:195] Run: openssl version
	I1222 22:57:40.950515  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.957071  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:57:40.963749  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.966999  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.967032  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:57:41.000342  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 22:57:41.007579  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.014450  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:57:41.021442  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024853  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024902  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.058138  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:57:41.065135  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.071858  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:57:41.078672  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082051  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082083  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.115012  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
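The ls/hash/test triplets above install each CA into the system trust store: OpenSSL locates CAs by a subject-hash filename (<hash>.0), which is why, after hashing minikubeCA.pem, the log probes /etc/ssl/certs/b5213941.0. A sketch of the equivalent manual steps for one cert (the log only verifies the hash link exists; creating it here is an assumption):

	# link a CA under the OpenSSL subject-hash name it is looked up by (sketch)
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"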
	I1222 22:57:41.122326  158374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:57:41.125872  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:57:41.158840  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:57:41.191689  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:57:41.224669  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:57:41.258802  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:57:41.292531  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
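The six openssl runs above all use -checkend 86400, i.e. "does this certificate expire within the next 24 hours?"; a failing check is presumably what triggers regeneration. Standalone form of one check:

	# exit 0: still valid 86400s (24h) from now; exit 1: expiring soon
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo 'cert valid for at least another day'
	else
	  echo 'cert expires within 24h'
	fi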
	I1222 22:57:41.327828  158374 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:41.327941  158374 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.347229  158374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:57:41.355058  158374 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:57:41.355067  158374 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:57:41.355102  158374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:57:41.362198  158374 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.362672  158374 kubeconfig.go:125] found "functional-384766" server: "https://192.168.49.2:8441"
	I1222 22:57:41.363809  158374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:57:41.371022  158374 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 22:43:13.034628184 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 22:57:40.197478933 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
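The drift detection above is a plain diff of the last-applied kubeadm config against the freshly rendered one; the only delta is the enable-admission-plugins value injected by this test (visible as ExtraOptions in the StartCluster dump above). As a sketch:

	# non-zero diff status => config drift => reconfigure from the new file
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo 'kubeadm config drift detected; reconfiguring cluster'
	fi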
	I1222 22:57:41.371029  158374 kubeadm.go:1161] stopping kube-system containers ...
	I1222 22:57:41.371066  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.389715  158374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 22:57:41.415695  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 22:57:41.423304  158374 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 22 22:47 /etc/kubernetes/scheduler.conf
	
	I1222 22:57:41.423364  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 22:57:41.430717  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 22:57:41.437811  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.437848  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 22:57:41.444879  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.452191  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.452233  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.459225  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 22:57:41.466383  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.466418  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
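Each grep-then-rm pair above applies one rule: if a kubeconfig under /etc/kubernetes no longer references the expected control-plane endpoint, delete it so the kubeconfig phase below regenerates it (admin.conf matched, so only the other three are removed). The same rule as a compact hypothetical loop:

	endpoint=https://control-plane.minikube.internal:8441
	for f in kubelet controller-manager scheduler; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf"; then
	    sudo rm -f "/etc/kubernetes/${f}.conf"  # stale endpoint; recreated by kubeadm below
	  fi
	done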
	I1222 22:57:41.473427  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 22:57:41.480724  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.518575  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.974225  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.135961  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.183844  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
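Instead of a full kubeadm init, the restart path replays the individual init phases above against the saved config, with the versioned binaries directory prepended to PATH. The same sequence as a loop (a sketch, not minikube's literal invocation):

	KVER=v1.35.0-rc.1
	for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	  sudo /bin/bash -c "env PATH=/var/lib/minikube/binaries/${KVER}:\$PATH \
	    kubeadm init phase ${phase} --config /var/tmp/minikube/kubeadm.yaml"
	done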
	I1222 22:57:42.223254  158374 api_server.go:52] waiting for apiserver process to appear ...
	I1222 22:57:42.223318  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:42.723474  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:43.223549  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:43.724244  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:44.223849  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:44.724026  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:45.223499  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:45.723832  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:46.223744  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:46.723529  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:47.224208  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:47.723932  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:48.223584  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:48.724285  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:49.224200  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:49.723863  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:50.223734  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:50.724424  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:51.224429  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:51.724246  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:52.223705  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:52.724265  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:53.223569  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:53.724236  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:54.224306  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:54.724058  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:55.223766  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:55.724443  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:56.224475  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:56.724242  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:57.223801  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:57.723648  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:58.224456  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:58.724111  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:59.224088  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:57:59.723871  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:00.223787  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:00.723546  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:01.224166  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:01.723833  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:02.224349  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:02.724090  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:03.223629  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:03.724404  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:04.223783  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:04.724330  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:05.223503  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:05.724088  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:06.224003  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:06.724375  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:07.223508  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:07.724225  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:08.224384  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:08.724300  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:09.224073  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:09.723806  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:10.223613  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:10.724450  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:11.224438  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:11.724384  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:12.224307  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:12.723407  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:13.224265  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:13.724010  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:14.223745  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:14.723548  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:15.223894  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:15.723495  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:16.223471  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:16.724428  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:17.224173  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:17.723800  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:18.223536  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:18.724376  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:19.224018  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:19.723833  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:20.223461  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:20.723797  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:21.223581  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:21.723470  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:22.224221  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:22.723735  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:23.223505  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:23.723726  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:24.223516  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:24.724413  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:25.223835  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:25.723691  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:26.223672  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:26.723588  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:27.223568  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:27.723458  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:28.224226  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:28.724079  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:29.223830  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:29.723697  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:30.224456  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:30.724136  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:31.223750  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:31.723578  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:32.223414  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:32.724025  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:33.224291  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:33.724443  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:34.224315  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:34.724019  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:35.223687  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:35.723472  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:36.224212  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:36.724077  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:37.223750  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:37.723438  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:38.223515  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:38.723503  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:39.224337  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:39.723503  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:40.224133  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:40.723853  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:41.223668  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:41.723695  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
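The long run of identical pgrep lines above is the apiserver wait loop: the same process check fires roughly every 500 ms and never matches, because kube-apiserver is not coming up (the empty container listings that follow confirm it). A minimal sketch of such a poll with an explicit deadline:

	# poll for the apiserver process every 500ms until a deadline (sketch)
	deadline=$((SECONDS + 60))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  (( SECONDS >= deadline )) && { echo 'timed out waiting for kube-apiserver' >&2; break; }
	  sleep 0.5
	done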
	I1222 22:58:42.223527  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:42.242498  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.242530  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:42.242576  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:42.263682  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.263696  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:42.263747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:42.284235  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.284250  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:42.284330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:42.303204  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.303219  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:42.303263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:42.321387  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.321404  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:42.321461  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:42.340277  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.340290  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:42.340333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:42.359009  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.359025  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:42.359034  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:42.359044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:42.407304  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:42.407323  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:42.423167  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:42.423184  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:42.478018  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:42.478032  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:42.478050  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:42.508140  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:42.508159  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
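Two idioms recur in the diagnostics cycle above and in its repeats below. The docker ps calls lean on the cri-dockerd container-naming convention k8s_<container>_<pod>_<namespace>_..., so a name filter such as k8s_kube-apiserver selects one component; and the container-status probe prefers crictl but falls back to docker. Isolated for clarity:

	# select a component's containers via the k8s_<name>_<pod>_<ns> convention
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	# prefer crictl; fall back to docker when crictl is not installed
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a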
	I1222 22:58:45.047948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:45.058851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:45.078438  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.078457  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:45.078506  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:45.096664  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.096678  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:45.096729  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:45.114982  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.114995  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:45.115033  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:45.132907  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.132920  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:45.132960  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:45.151352  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.151368  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:45.151409  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:45.169708  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.169725  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:45.169767  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:45.187775  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.187790  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:45.187802  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:45.187814  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:45.242776  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:45.242790  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:45.242800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:45.273873  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:45.273892  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:45.303522  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:45.303541  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:45.351682  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:45.351702  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:47.869586  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:47.880760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:47.899543  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.899560  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:47.899617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:47.917954  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.917970  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:47.918017  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:47.936207  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.936224  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:47.936269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:47.954310  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.954328  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:47.954376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:47.971746  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.971762  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:47.971806  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:47.989993  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.990008  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:47.990054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:48.008188  158374 logs.go:282] 0 containers: []
	W1222 22:58:48.008204  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:48.008215  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:48.008227  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:48.056174  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:48.056192  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:48.071128  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:48.071143  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:48.124584  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:48.124621  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:48.124635  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:48.155889  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:48.155907  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:50.685742  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:50.696961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:50.716371  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.716385  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:50.716430  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:50.734780  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.734798  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:50.734842  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:50.753152  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.753169  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:50.753213  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:50.771281  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.771296  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:50.771338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:50.788814  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.788826  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:50.788872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:50.806768  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.806781  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:50.806837  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:50.824539  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.824552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:50.824561  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:50.824581  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:50.873346  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:50.873363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:50.888174  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:50.888188  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:50.942890  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:50.942904  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:50.942915  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:50.971205  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:50.971223  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:53.500770  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:53.512538  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:53.535794  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.535812  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:53.535872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:53.554667  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.554684  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:53.554739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:53.573251  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.573267  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:53.573317  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:53.591664  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.591686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:53.591739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:53.610128  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.610141  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:53.610183  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:53.628089  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.628105  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:53.628148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:53.645890  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.645908  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:53.645919  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:53.645932  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:53.692043  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:53.692062  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:53.707092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:53.707107  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:53.761308  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:53.761320  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:53.761331  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:53.789713  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:53.789730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:56.318819  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:56.329790  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:56.348795  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.348808  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:56.348851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:56.366850  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.366866  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:56.366932  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:56.385468  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.385483  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:56.385530  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:56.404330  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.404345  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:56.404406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:56.422533  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.422549  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:56.422631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:56.440667  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.440681  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:56.440742  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:56.459073  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.459088  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:56.459099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:56.459113  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:56.506766  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:56.506783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:56.523645  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:56.523667  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:56.580517  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:56.580531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:56.580543  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:56.610571  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:56.610588  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
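	The cycle above is minikube's retry loop: between attempts to find a running kube-apiserver, logs.go gathers kubelet, dmesg, "describe nodes", Docker and container-status output over ssh_runner. The probes can be replayed by hand with the exact commands from this log; a minimal sketch, assuming a shell inside the functional-384766 node (for example via "minikube ssh -p functional-384766" — the profile name is taken from this report):

	    # Replay the log collector's probes (commands copied verbatim from the log above).
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process running?
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      ids=$(sudo docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	      echo "${c}: ${ids:-no containers}"
	    done
	    sudo journalctl -u kubelet -n 400                   # kubelet logs
	    sudo journalctl -u docker -u cri-docker -n 400      # Docker / cri-docker logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig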
	[the same log-gathering cycle repeats roughly every 2.7 seconds from 22:58:59 through 22:59:21 (nine further iterations, log sub-steps in varying order); each pass finds 0 containers matching "kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager" and "kindnet", and "kubectl describe nodes" exits with status 1 because the connection to the server localhost:8441 is refused]
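	Every iteration of that loop ends identically: zero k8s_* containers and "connection refused" dialing [::1]:8441, meaning nothing was listening on the apiserver port for the whole window. A quick way to confirm that directly, sketched under the same assumptions (port 8441 is the apiserver port visible throughout the refused connections above; ss and curl are assumed to be available in the node image, and /livez is the standard kube-apiserver health endpoint):

	    # Is anything bound on the apiserver port at all?
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	    # -k skips certificate verification for a quick liveness probe.
	    curl -sk https://localhost:8441/livez || echo "apiserver unreachable"

	The final probe captured in this window, at 22:59:24, shows the same picture: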
	I1222 22:59:24.515104  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:24.526883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:24.546166  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.546180  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:24.546228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:24.565305  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.565319  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:24.565361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:24.584559  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.584572  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:24.584631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:24.604650  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.604664  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:24.604712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:24.623346  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.623362  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:24.623412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:24.642324  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.642343  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:24.642406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:24.661990  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.662004  158374 logs.go:284] No container was found matching "kindnet"
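Each probe round above starts with pgrep -xnf matching the kube-apiserver command line (no process found) and then enumerates the control-plane containers one name filter at a time: k8s_kube-apiserver, k8s_etcd, k8s_coredns, k8s_kube-scheduler, k8s_kube-proxy, k8s_kube-controller-manager, k8s_kindnet. Every filter returns an empty list, so no control-plane container was ever created. As a hedged shortcut for reproducing this by hand, one name-prefix filter covers all of them at once:

    $ minikube -p functional-384766 ssh -- docker ps -a --filter name=k8s_ --format '{{.ID}} {{.Names}} {{.Status}}'

An empty result, matching the log, means kubelet never started the control-plane static pods, which is why the harness falls back to the kubelet, dmesg, and Docker journals next.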
	I1222 22:59:24.662013  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:24.662024  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:24.677840  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:24.677855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:24.734271  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:24.726969   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.727498   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729051   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729473   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.731041   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:24.734289  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:24.734304  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:24.764562  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:24.764580  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.793099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:24.793115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
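Each diagnostic round collects the same five sources: the Docker and cri-docker journals, container status (crictl with a docker ps fallback), the kubelet journal, filtered dmesg, and kubectl describe nodes. When reproducing this outside the harness, the whole bundle can be captured at once with the standard logs command (assuming the profile from this run is still present):

    $ minikube -p functional-384766 logs --file=./functional-384766.log

which writes the same journals and container listings to a single file for inspection.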
	I1222 22:59:27.340497  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:27.351904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:27.372400  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.372419  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:27.372472  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:27.392295  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.392312  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:27.392363  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:27.411771  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.411784  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:27.411828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:27.430497  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.430512  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:27.430558  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:27.449983  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.449999  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:27.450044  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:27.469696  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.469714  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:27.469771  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:27.488685  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.488702  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:27.488715  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:27.488730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:27.517546  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:27.517564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.564530  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:27.564554  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:27.579944  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:27.579963  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:27.636369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:27.629189   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.629734   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631292   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631750   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.633229   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:27.636383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:27.636394  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.168117  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:30.179633  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:30.199078  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.199094  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:30.199144  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:30.218504  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.218517  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:30.218559  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:30.237792  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.237810  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:30.237858  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:30.257058  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.257073  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:30.257118  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:30.277405  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.277422  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:30.277475  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:30.297453  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.297467  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:30.297515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:30.316894  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.316915  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:30.316924  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:30.316936  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.346684  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:30.346705  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:30.376362  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:30.376378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:30.422918  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:30.422940  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:30.438917  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:30.438935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:30.494621  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
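The describe-nodes probe runs the version-pinned binary under /var/lib/minikube/binaries/v1.35.0-rc.1 against the in-node kubeconfig at /var/lib/minikube/kubeconfig, so it fails with the same connection refused as every other client. From the host, an equivalent hedged check uses the profile's kubectl context (assuming minikube has written it to the default kubeconfig):

    $ kubectl --context functional-384766 describe nodes

Until the apiserver answers on port 8441, this exits 1 with the same "connection to the server ... was refused" message seen in the log.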
	I1222 22:59:32.995681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:33.006896  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:33.026274  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.026292  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:33.026336  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:33.045071  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.045087  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:33.045134  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:33.064583  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.064611  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:33.064660  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:33.085351  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.085374  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:33.085431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:33.103978  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.103991  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:33.104045  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:33.123168  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.123186  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:33.123241  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:33.143080  158374 logs.go:282] 0 containers: []
	W1222 22:59:33.143095  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:33.143105  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:33.143116  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:33.197825  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:33.190806   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.191305   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.192895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.193360   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:33.194895   22712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:33.197836  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:33.197850  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:33.226457  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:33.226476  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:33.257519  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:33.257546  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:33.309950  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:33.309971  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
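The dmesg invocation above narrows the kernel ring buffer to warnings and worse: -P disables the pager, -H keeps human-readable timestamps, -L=never strips color codes, and --level warn,err,crit,alert,emerg keeps only those priorities, with tail -n 400 bounding the output. A hedged host-side equivalent for this profile:

    $ minikube -p functional-384766 ssh -- sudo dmesg --level=warn,err,crit,alert,emerg | tail -n 400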
	I1222 22:59:35.827217  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:35.838617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:35.858342  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.858358  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:35.858412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:35.877344  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.877362  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:35.877416  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:35.897833  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.897848  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:35.897902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:35.916409  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.916428  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:35.916485  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:35.935688  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.935705  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:35.935766  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:35.954858  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.954876  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:35.954924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:35.973729  158374 logs.go:282] 0 containers: []
	W1222 22:59:35.973746  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:35.973757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:35.973767  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:36.002045  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:36.002069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:36.029933  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:36.029949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:36.075963  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:36.075988  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:36.091711  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:36.091734  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:36.147521  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:36.140402   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.141008   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142529   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.142962   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:36.144445   22889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:38.649172  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:38.660310  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:38.679380  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.679396  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:38.679449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:38.698305  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.698318  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:38.698365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:38.717524  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.717541  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:38.717601  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:38.736808  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.736822  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:38.736874  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:38.756003  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.756017  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:38.756061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:38.774845  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.774858  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:38.774901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:38.793240  158374 logs.go:282] 0 containers: []
	W1222 22:59:38.793257  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:38.793269  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:38.793281  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:38.821390  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:38.821407  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:38.868649  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:38.868671  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:38.884729  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:38.884749  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:38.940189  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:38.933255   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.933828   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935320   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.935747   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:38.937211   23042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:38.940200  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:38.940211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:41.470854  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:41.481957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:41.501032  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.501051  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:41.501102  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:41.522720  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.522740  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:41.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:41.544756  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.544769  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:41.544812  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:41.564773  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.564789  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:41.565312  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:41.586087  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.586104  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:41.586156  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:41.604141  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.604156  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:41.604206  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:41.623828  158374 logs.go:282] 0 containers: []
	W1222 22:59:41.623846  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:41.623858  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:41.623870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:41.652778  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:41.652798  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:41.680995  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:41.681014  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:41.728777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:41.728800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:41.744897  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:41.744916  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:41.800644  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:41.793494   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.794046   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.795564   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.796035   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:41.797584   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:44.302472  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:44.313688  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:44.333253  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.333267  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:44.333313  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:44.352778  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.352793  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:44.352851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:44.372079  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.372093  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:44.372135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:44.390683  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.390701  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:44.390761  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:44.409168  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.409185  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:44.409259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:44.426368  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.426381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:44.426426  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:44.444108  158374 logs.go:282] 0 containers: []
	W1222 22:59:44.444124  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:44.444138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:44.444148  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:44.481663  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:44.481679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:44.529101  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:44.529121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:44.546062  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:44.546081  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:44.600660  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:44.593909   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.594437   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.595959   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.596345   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:44.597904   23362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:44.600672  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:44.600684  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.129588  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:47.140641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:47.159435  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.159453  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:47.159498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:47.178540  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.178560  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:47.178634  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:47.198365  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.198383  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:47.198438  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:47.217411  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.217429  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:47.217479  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:47.236273  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.236287  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:47.236330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:47.255917  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.255930  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:47.255973  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:47.274750  158374 logs.go:282] 0 containers: []
	W1222 22:59:47.274768  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:47.274779  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:47.274792  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:47.322428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:47.322452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:47.339666  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:47.339691  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:47.396552  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:47.389135   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.389730   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391303   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.391736   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:47.393254   23492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:47.396562  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:47.396574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:47.425768  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:47.425785  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:49.955844  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:49.966834  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:49.985390  158374 logs.go:282] 0 containers: []
	W1222 22:59:49.985405  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:49.985446  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:50.003669  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.003687  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:50.003735  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:50.023188  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.023203  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:50.023254  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:50.042292  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.042309  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:50.042360  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:50.060457  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.060471  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:50.060516  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:50.078548  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.078565  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:50.078666  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:50.096685  158374 logs.go:282] 0 containers: []
	W1222 22:59:50.096704  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:50.096717  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:50.096730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:50.125658  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:50.125680  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:50.173107  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:50.173124  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:50.188136  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:50.188152  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:50.242225  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:50.235426   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.235931   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237475   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.237960   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:50.239487   23662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
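The timestamps show the harness re-running the full probe roughly every 2.5 seconds (22:59:21, :24, :27, :30, ...), a plain poll-until-ready loop around the apiserver check. A hedged one-line equivalent of that wait, run inside the node, would be:

    $ until curl -ksf https://localhost:8441/livez >/dev/null; do sleep 2.5; done

Here /livez and the 2.5s interval are illustrative only; the harness's own loop lives in the minikube source, not in this report.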
	I1222 22:59:50.242236  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:50.242246  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:52.771712  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:52.783330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:52.802157  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.802171  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:52.802219  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:52.820709  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.820726  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:52.820777  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:52.839433  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.839448  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:52.839515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:52.857834  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.857849  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:52.857903  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:52.875916  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.875933  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:52.875977  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:52.893339  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.893351  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:52.893394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:52.911298  158374 logs.go:282] 0 containers: []
	W1222 22:59:52.911311  158374 logs.go:284] No container was found matching "kindnet"
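Each polling iteration runs the same seven name-filtered queries seen above. A condensed equivalent of those probes, looping over the component names minikube checks (control-plane containers follow the k8s_<component> naming convention):

    # Mirror the per-component probes from the trace
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
        echo "${c}: ${ids:-none}"
    done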
	I1222 22:59:52.911319  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:52.911329  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:52.942377  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:52.942392  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:52.969572  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:52.969587  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:53.014323  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:53.014339  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:53.029751  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:53.029764  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:53.085527  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:53.078407   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.078987   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.080619   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.081049   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:53.082721   23822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:55.587247  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:55.598436  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:55.617688  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.617704  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:55.617764  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:55.637510  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.637528  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:55.637585  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:55.656117  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.656132  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:55.656187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:55.675258  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.675278  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:55.675327  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:55.694537  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.694555  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:55.694627  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:55.711993  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.712011  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:55.712056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:55.730198  158374 logs.go:282] 0 containers: []
	W1222 22:59:55.730216  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:55.730228  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:55.730242  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:55.795390  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:55.795416  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:55.811790  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:55.811809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:55.867201  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:55.859754   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.860414   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862051   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.862474   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:55.863985   23959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:55.867213  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:55.867224  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:55.898358  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:55.898381  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
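The "container status" gatherer prefers crictl and falls back to docker: `which crictl || echo crictl` expands to the crictl path when it is installed and to the bare word crictl otherwise, so on a docker-only node the first command fails and the `|| sudo docker ps -a` branch runs. Spelled out below (not an exact equivalent: the original also falls back when crictl exists but its ps call fails):

    # Prefer crictl when present, otherwise list containers via docker
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi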
	I1222 22:59:58.428962  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:58.440024  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:58.459773  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.459787  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:58.459828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:58.478843  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.478863  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:58.478920  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:58.498503  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.498518  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:58.498563  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:58.518032  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.518052  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:58.518110  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:58.537315  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.537330  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:58.537388  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:58.556299  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.556319  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:58.556368  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:58.575345  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.575359  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:58.575369  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:58.575378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.603490  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:58.603508  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:58.651589  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:58.651620  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:58.667341  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:58.667358  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:58.723840  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:58.716532   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.717054   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.718649   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.719098   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.720749   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:58.723855  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:58.723865  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.257052  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:01.268153  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:01.287939  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.287954  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:01.288001  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:01.306844  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.306857  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:01.306904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:01.326511  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.326530  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:01.326579  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:01.345734  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.345748  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:01.345793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:01.364619  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.364634  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:01.364682  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:01.383578  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.383605  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:01.383654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:01.401753  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.401770  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:01.401781  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:01.401795  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:01.457583  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:01.450373   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.450906   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452946   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.454493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:01.457611  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:01.457625  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.486870  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:01.486891  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:01.514587  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:01.514619  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:01.561028  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:01.561052  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
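The dmesg invocation repeated above trims the kernel ring buffer down to actionable entries before the 400-line tail:

    # -H human-readable timestamps; -P suppresses the pager that -H would start
    # -L=never disables color codes; --level keeps warning-and-worse only
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400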
	I1222 23:00:04.078615  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:04.089843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:04.109432  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.109450  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:04.109498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:04.128585  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.128630  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:04.128680  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:04.147830  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.147846  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:04.147901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:04.166672  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.166686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:04.166730  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:04.185500  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.185523  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:04.185574  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:04.204345  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.204360  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:04.204404  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:04.222488  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.222503  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:04.222513  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:04.222523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:04.252225  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:04.252244  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:04.280489  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:04.280507  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:04.329635  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:04.329657  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:04.345631  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:04.345650  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:04.400851  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:04.393656   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.394230   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.395887   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.396281   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.397838   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:06.901498  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
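The pgrep call that opens each iteration is the liveness check driving the retry loop: -f matches the pattern against the full command line, -x requires the whole command line to match it, and -n reports only the newest match. A nonzero exit (no kube-apiserver process) is what sends the code back into log gathering:

    # Liveness probe from the trace; quotes keep the shell off the regex
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo "exit: $?"   # 1 means no matching apiserver process -> retry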
	I1222 23:00:06.913084  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:06.932724  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.932739  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:06.932793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:06.951127  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.951146  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:06.951187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:06.969488  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.969501  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:06.969543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:06.987763  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.987780  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:06.987824  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:07.005884  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.005900  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:07.005951  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:07.026370  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.026397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:07.026449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:07.047472  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.047486  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:07.047496  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:07.047505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:07.092662  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:07.092679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:07.107657  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:07.107672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:07.162182  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:07.155104   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.155706   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157237   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157737   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.159269   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:07.162193  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:07.162203  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:07.190466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:07.190482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:09.719767  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:09.730961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:09.750004  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.750021  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:09.750061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:09.768191  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.768203  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:09.768240  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:09.785655  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.785668  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:09.785715  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:09.803931  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.803946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:09.803987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:09.823040  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.823058  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:09.823105  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:09.841359  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.841373  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:09.841413  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:09.859786  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.859799  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:09.859812  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:09.859824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
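The journald gathers target specific units with a fixed tail length. The same slices can be pulled interactively (a sketch assuming the functional-384766 profile; --no-pager added for non-interactive capture):

    # Kubelet slice, newest 400 lines
    minikube -p functional-384766 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
    # Docker-side slice combines the docker and cri-docker units
    minikube -p functional-384766 ssh -- sudo journalctl -u docker -u cri-docker -n 400 --no-pager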
	I1222 23:00:09.905428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:09.905445  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:09.920496  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:09.920511  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:09.974948  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:09.968124   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.968667   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970182   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970583   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.972119   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:09.974969  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:09.974982  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:10.003466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:10.003485  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.535644  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:12.546867  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:12.565761  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.565778  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:12.565825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:12.584431  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.584446  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:12.584504  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:12.602950  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.602966  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:12.603009  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:12.621210  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.621224  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:12.621268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:12.639377  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.639393  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:12.639444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:12.657924  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.657941  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:12.657984  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:12.676311  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.676326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:12.676336  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:12.676346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.703500  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:12.703515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:12.750933  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:12.750951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:12.766856  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:12.766870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:12.822138  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:12.815289   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.815808   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817339   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817801   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.819283   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
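Note that the describe-nodes gather deliberately uses the node-local, version-matched kubectl binary and the node's own kubeconfig rather than the host's kubectl and context. The same call can be issued by hand (again assuming the functional-384766 profile):

    # Version-matched kubectl against the node-local kubeconfig
    minikube -p functional-384766 ssh -- "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"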
	I1222 23:00:12.822170  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:12.822269  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.355685  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:15.366722  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:15.385319  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.385334  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:15.385401  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:15.402653  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.402666  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:15.402712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:15.420695  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.420709  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:15.420757  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:15.438422  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.438438  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:15.438488  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:15.457961  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.457978  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:15.458023  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:15.477016  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.477031  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:15.477075  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:15.495320  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.495335  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:15.495346  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:15.495363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:15.542697  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:15.542716  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:15.557986  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:15.558002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:15.613071  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:15.605742   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.606273   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.607863   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.608322   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.609898   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:15.613082  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:15.613093  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.643893  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:15.643912  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:18.176478  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:18.187435  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:18.206820  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.206836  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:18.206885  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:18.225162  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.225179  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:18.225242  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:18.244089  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.244106  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:18.244149  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:18.263582  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.263618  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:18.263678  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:18.285421  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.285439  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:18.285483  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:18.304575  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.304616  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:18.304679  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:18.322814  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.322831  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:18.322842  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:18.322853  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:18.367678  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:18.367695  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:18.384038  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:18.384060  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:18.439158  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:18.432401   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.432947   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434455   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434840   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.436266   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:18.439172  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:18.439186  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:18.468274  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:18.468290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
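
The container-status probe above is a two-stage fallback: `which crictl || echo crictl` expands to the full crictl path when the binary is installed (and to the bare name otherwise), and if that invocation fails for any reason the trailing `|| sudo docker ps -a` drops back to the plain Docker CLI. A minimal expanded sketch of the same chain (an illustration, not minikube source):

    # try a CRI-compatible listing first; fall back to the Docker CLI
    if sudo "$(which crictl || echo crictl)" ps -a; then
        :   # crictl was found and listed containers successfully
    else
        sudo docker ps -a
    fi
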
	I1222 23:00:20.996786  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:21.007676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:21.026577  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.026589  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:21.026662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:21.045179  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.045195  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:21.045237  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:21.064216  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.064230  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:21.064278  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:21.082929  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.082946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:21.082991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:21.101298  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.101314  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:21.101372  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:21.119708  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.119719  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:21.119759  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:21.137828  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.137841  158374 logs.go:284] No container was found matching "kindnet"
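
Each docker ps probe in the scan above relies on the dockershim/cri-dockerd container naming scheme, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so filtering on the k8s_ prefix plus a component name finds that component's container without knowing the pod UID. Because the listing uses -a, an empty result means the container was never created (or was removed), not merely that it exited. The same probe with a slightly richer format string (the --format value is the only change from the logged command):

    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Names}} {{.Status}}'
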
	I1222 23:00:21.137849  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:21.137859  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:21.167198  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:21.167214  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:21.194956  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:21.194974  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:21.243666  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:21.243687  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:21.259092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:21.259108  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:21.316128  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:21.309163   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.309711   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311266   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311721   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.313183   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:23.817830  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:23.829010  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:23.847819  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.847833  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:23.847883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:23.866626  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.866640  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:23.866685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:23.884038  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.884053  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:23.884099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:23.903021  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.903037  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:23.903091  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:23.921758  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.921771  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:23.921817  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:23.940118  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.940135  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:23.940176  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:23.958805  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.958817  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:23.958826  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:23.958836  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:24.006524  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:24.006542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:24.021579  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:24.021602  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:24.077965  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:24.077976  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:24.077986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:24.107448  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:24.107464  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
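
The sudo pgrep line that opens each of these cycles is the readiness poll itself: -f matches the pattern against the full command line, -x requires the whole line to match, and -n returns only the newest matching PID. The timestamps show the poll repeating roughly every three seconds until a kube-apiserver process appears. As a standalone sketch, the wait amounts to something like this (an illustration of the observed cadence, not minikube's actual retry code):

    # poll until a kube-apiserver process for this profile exists
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 2.5
    done
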
	I1222 23:00:26.635419  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:26.646546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:26.665787  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.665805  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:26.665856  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:26.683869  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.683885  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:26.683930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:26.702549  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.702565  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:26.702628  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:26.720884  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.720901  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:26.720947  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:26.739437  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.739453  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:26.739498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:26.757871  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.757885  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:26.757927  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:26.775863  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.775882  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:26.775893  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:26.775902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:26.821886  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:26.821903  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:26.837204  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:26.837220  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:26.891970  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:26.884764   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.885231   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.886895   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.887281   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.888844   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:26.891981  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:26.891991  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:26.922932  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:26.922949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
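
The five memcache.go entries in each describe-nodes stderr block are client-go's discovery layer retrying the server API group list; every attempt dies with dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the apiserver port yet, which is consistent with the empty k8s_kube-apiserver scans above. A quick manual probe of the same port from inside the node would look like the following (a hypothetical check, assuming the profile's node is still up, `minikube ssh` reaches it, and curl is present in the node image):

    minikube ssh -- sudo curl -sk https://localhost:8441/healthz
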
	I1222 23:00:29.452400  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:29.463551  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:29.482265  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.482278  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:29.482326  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:29.501689  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.501707  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:29.501762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:29.522730  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.522747  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:29.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:29.542657  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.542671  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:29.542720  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:29.560883  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.560897  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:29.560938  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:29.579281  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.579297  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:29.579340  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:29.597740  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.597755  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:29.597766  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:29.597777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:29.627231  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:29.627248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:29.655168  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:29.655183  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:29.703330  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:29.703348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:29.718800  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:29.718821  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:29.773515  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:29.766735   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.767220   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.768799   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.769197   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.770715   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:32.274411  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:32.285356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:32.304409  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.304423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:32.304465  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:32.324167  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.324183  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:32.324228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:32.342878  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.342893  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:32.342950  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:32.362212  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.362226  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:32.362268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:32.381154  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.381171  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:32.381229  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:32.400512  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.400533  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:32.400587  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:32.419083  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.419097  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:32.419112  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:32.419121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:32.466805  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:32.466824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:32.482931  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:32.482947  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:32.543407  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:32.536205   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.536750   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538320   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538796   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.540418   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:32.543419  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:32.543436  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:32.572975  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:32.572990  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:35.102503  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:35.113391  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:35.132097  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.132109  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:35.132151  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:35.150359  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.150378  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:35.150441  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:35.169070  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.169088  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:35.169141  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:35.187626  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.187641  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:35.187686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:35.205837  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.205854  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:35.205895  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:35.224198  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.224213  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:35.224255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:35.241750  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.241765  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:35.241774  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:35.241783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:35.286130  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:35.286145  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:35.301096  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:35.301111  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:35.356973  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:35.349818   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.350427   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.351993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.352503   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.353993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:35.356986  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:35.356997  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:35.385504  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:35.385523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:37.914534  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:37.925443  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:37.943928  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.943942  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:37.943990  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:37.961408  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.961424  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:37.961481  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:37.979364  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.979380  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:37.979437  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:37.997737  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.997751  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:37.997796  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:38.016341  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.016358  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:38.016425  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:38.035203  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.035221  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:38.035270  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:38.053655  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.053672  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:38.053684  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:38.053699  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:38.100003  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:38.100022  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:38.116642  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:38.116661  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:38.170897  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:38.164046   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.164628   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166176   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166562   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.168042   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:38.170907  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:38.170921  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:38.200254  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:38.200273  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:40.729556  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:40.740911  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:40.762071  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.762085  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:40.762131  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:40.782191  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.782207  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:40.782259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:40.802303  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.802318  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:40.802365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:40.821101  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.821115  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:40.821159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:40.839813  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.839830  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:40.839880  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:40.859473  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.859490  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:40.859546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:40.877060  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.877076  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:40.877088  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:40.877101  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:40.922835  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:40.922852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:40.938346  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:40.938361  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:40.993518  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:40.986693   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.987159   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.988687   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.989134   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.990683   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:40.993531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:40.993542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:41.023093  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:41.023109  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:43.552889  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:43.564108  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:43.582897  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.582914  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:43.582969  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:43.601736  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.601750  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:43.601793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:43.620069  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.620083  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:43.620126  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:43.638251  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.638269  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:43.638335  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:43.656816  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.656829  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:43.656878  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:43.675289  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.675302  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:43.675354  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:43.693789  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.693805  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:43.693817  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:43.693828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:43.749062  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:43.742036   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.742643   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.744184   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.744673   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.746198   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:43.749078  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:43.749091  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:43.779321  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:43.779346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:43.808611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:43.808632  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:43.853216  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:43.853235  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
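
The kubelet and Docker sections in every cycle come straight from the systemd journal: -u scopes the read to a unit (and can repeat, so the docker and cri-docker entries are interleaved chronologically) while -n 400 caps the read at the last 400 lines. Run by hand, the two reads from the log are:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
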
	I1222 23:00:46.369073  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:46.380061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:46.398781  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.398797  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:46.398851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:46.416817  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.416834  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:46.416877  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:46.434863  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.434877  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:46.434923  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:46.453147  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.453164  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:46.453208  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:46.471210  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.471224  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:46.471272  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:46.489455  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.489468  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:46.489517  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:46.508022  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.508039  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:46.508050  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:46.508061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:46.555488  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:46.555505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:46.571399  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:46.571415  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:46.626809  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:46.620219   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.620751   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.622257   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.622660   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.624126   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:46.626822  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:46.626834  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:46.656631  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:46.656648  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
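	(Each retry cycle here is the same fixed sequence on a roughly 2.7-second cadence, inferred from the timestamps: a pgrep for a running apiserver process, then one docker ps query per expected k8s_<component> container. A rough shell re-creation of that loop, hypothetical only; the real implementation lives in minikube's Go code (logs.go):

	    #!/bin/bash
	    # Poll until an apiserver process appears, reporting missing control-plane containers.
	    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet)
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      for c in "${components[@]}"; do
	        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	        [ -z "$ids" ] && echo "No container was found matching \"${c}\""
	      done
	      sleep 2.7
	    done
	)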
	I1222 23:00:49.197674  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:49.208617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:49.228203  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.228221  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:49.228267  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:49.246623  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.246638  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:49.246676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:49.264794  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.264810  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:49.264861  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:49.283414  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.283431  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:49.283480  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:49.301735  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.301748  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:49.301787  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:49.320079  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.320092  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:49.320134  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:49.339280  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.339296  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:49.339308  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:49.339324  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:49.354937  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:49.354953  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:49.409733  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:49.402775   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.403250   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.404843   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.405306   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.406844   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:49.409744  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:49.409753  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:49.438748  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:49.438765  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:49.466948  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:49.466964  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:52.015822  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:52.027799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:52.047753  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.047770  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:52.047825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:52.067486  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.067502  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:52.067557  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:52.086094  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.086110  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:52.086158  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:52.105497  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.105513  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:52.105560  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:52.123560  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.123575  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:52.123641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:52.141887  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.141905  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:52.141957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:52.160464  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.160480  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:52.160491  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:52.160500  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:52.207605  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:52.207623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:52.222700  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:52.222714  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:52.277875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:52.270341   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.270915   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.272985   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.273548   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.275027   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:52.277887  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:52.277899  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:52.307146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:52.307163  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
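	(The container-status command relies on a small fallback idiom: `which crictl || echo crictl` expands to the crictl path when it is installed, and to the bare word crictl otherwise; in the latter case the crictl invocation fails and the outer || falls through to docker. The same pattern in isolation:

	    # Prefer crictl when available; otherwise the failed call falls back to docker.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	)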
	I1222 23:00:54.836681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:54.847631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:54.866945  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.866961  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:54.867018  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:54.885478  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.885491  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:54.885540  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:54.904643  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.904657  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:54.904701  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:54.923539  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.923554  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:54.923615  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:54.941322  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.941338  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:54.941399  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:54.961766  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.961785  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:54.961839  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:54.981417  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.981432  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:54.981442  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:54.981452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:55.012286  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:55.012306  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:55.043682  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:55.043703  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:55.091444  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:55.091466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:55.107015  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:55.107039  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:55.162701  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:55.155754   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.156265   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.157880   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.158349   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.159815   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:57.664308  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:57.675489  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:57.692858  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.692875  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:57.692935  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:57.712522  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.712538  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:57.712607  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:57.732210  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.732226  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:57.732269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:57.751532  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.751545  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:57.751602  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:57.772243  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.772257  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:57.772301  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:57.791227  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.791243  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:57.791304  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:57.810525  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.810543  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:57.810552  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:57.810561  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:57.858495  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:57.858513  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:57.873762  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:57.873777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:57.929650  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:57.922350   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.922945   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924490   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924968   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.926535   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:57.929662  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:57.929672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:57.960293  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:57.960310  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.491408  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:00.502843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:00.522074  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.522090  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:00.522138  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:00.540871  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.540888  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:00.540945  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:00.558913  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.558931  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:00.558975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:00.577980  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.577997  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:00.578050  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:00.597037  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.597056  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:00.597104  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:00.615867  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.615881  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:00.615924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:00.634566  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.634581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:00.634609  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:00.634623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.663403  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:00.663425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:00.712341  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:00.712364  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:00.729099  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:00.729121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:00.785283  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:00.778009   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.778541   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780082   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780546   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.782086   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:00.785294  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:00.785307  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:03.318166  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:03.329041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:03.347864  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.347878  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:03.347940  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:03.366921  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.366937  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:03.366991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:03.384054  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.384070  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:03.384117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:03.403365  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.403380  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:03.403432  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:03.422487  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.422501  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:03.422556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:03.440736  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.440751  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:03.440805  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:03.459891  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.459906  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:03.459915  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:03.459926  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:03.508624  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:03.508646  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:03.525926  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:03.525946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:03.581788  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:03.574395   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.574955   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.576542   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.577046   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.578534   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:03.581798  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:03.581809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:03.610547  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:03.610564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:06.140960  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:06.152054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:06.171277  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.171290  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:06.171346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:06.190005  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.190024  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:06.190073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:06.209033  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.209052  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:06.209119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:06.228351  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.228368  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:06.228424  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:06.248668  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.248682  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:06.248737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:06.269569  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.269587  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:06.269662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:06.291826  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.291841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:06.291851  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:06.291861  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:06.337818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:06.337839  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:06.353395  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:06.353413  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:06.410648  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:06.403406   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.403979   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.405629   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.406063   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.407632   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:06.410666  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:06.410681  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:06.440427  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:06.440447  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:08.971014  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:08.982172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:09.001296  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.001315  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:09.001377  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:09.021010  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.021025  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:09.021065  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:09.039299  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.039315  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:09.039361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:09.059055  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.059069  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:09.059119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:09.079065  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.079080  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:09.079123  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:09.098129  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.098148  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:09.098194  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:09.116949  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.116965  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:09.116974  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:09.116984  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:09.172761  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:09.165842   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.166366   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.167907   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.168380   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.169881   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:09.172771  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:09.172782  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:09.203094  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:09.203115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:09.231372  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:09.231389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:09.279710  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:09.279730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:11.797972  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:11.809159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:11.828406  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.828423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:11.828474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:11.847173  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.847194  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:11.847248  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:11.866123  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.866141  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:11.866191  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:11.885374  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.885388  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:11.885428  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:11.904372  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.904386  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:11.904429  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:11.923420  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.923437  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:11.923496  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:11.942294  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.942332  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:11.942344  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:11.942356  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:11.999449  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:11.992239   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.992816   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994388   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994771   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.996307   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:11.999465  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:11.999478  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:12.029498  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:12.029524  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:12.058708  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:12.058726  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:12.106818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:12.106837  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:14.624265  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:14.635338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:14.654466  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.654481  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:14.654523  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:14.674801  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.674817  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:14.674860  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:14.694258  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.694275  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:14.694322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:14.714916  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.714932  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:14.714980  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:14.734141  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.734155  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:14.734198  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:14.754093  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.754108  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:14.754162  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:14.773451  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.773468  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:14.773481  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:14.773496  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:14.830750  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:14.823428   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.824043   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.825689   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.826129   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.827703   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:14.830760  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:14.830770  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:14.859787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:14.859804  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:14.888611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:14.888631  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:14.936097  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:14.936118  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
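
The cycle above is minikube's control-plane health probe: it checks for a running kube-apiserver process and, when that fails, enumerates the expected control-plane containers by name, then sweeps kubelet, dmesg, describe-nodes, Docker, and container-status logs before retrying. A minimal bash sketch of the same probe, runnable inside the node via minikube ssh (the k8s_ name prefix and the component list are taken from the log above):

    for component in kube-apiserver etcd coredns kube-scheduler \
                     kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}')
      [ -z "$ids" ] && echo "no container matching ${component}"
    done
    # the process-level check that gates each retry
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver not running'

Every probe in this run reports zero containers for every component, so each retry falls through to the full log sweep.
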
	I1222 23:01:17.453191  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:17.464732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:17.485010  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.485025  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:17.485072  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:17.505952  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.505969  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:17.506027  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:17.528761  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.528776  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:17.528821  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:17.549296  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.549312  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:17.549376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:17.568100  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.568117  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:17.568167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:17.587017  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.587034  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:17.587086  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:17.606020  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.606036  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:17.606045  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:17.606061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.621414  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:17.621430  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:17.677909  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:17.670771   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.671262   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.672895   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.673340   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.674869   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:17.677925  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:17.677935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:17.708117  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:17.708138  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:17.739554  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:17.739574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:20.288948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:20.299810  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:20.318929  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.318945  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:20.319006  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:20.337556  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.337573  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:20.337641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:20.355705  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.355718  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:20.355760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:20.373672  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.373686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:20.373726  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:20.392616  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.392631  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:20.392674  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:20.411253  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.411270  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:20.411322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:20.429537  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.429552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:20.429563  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:20.429575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:20.445080  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:20.445098  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:20.501506  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:20.494080   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.494697   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496376   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496836   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.498365   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:20.501520  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:20.501535  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:20.531907  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:20.531925  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:20.560530  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:20.560547  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.108846  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:23.119974  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:23.138613  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.138631  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:23.138686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:23.156921  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.156935  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:23.156975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:23.175153  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.175166  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:23.175209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:23.193263  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.193295  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:23.193356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:23.212207  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.212221  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:23.212263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:23.230929  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.230945  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:23.231005  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:23.249646  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.249659  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:23.249669  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:23.249679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:23.277729  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:23.277745  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.324063  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:23.324082  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:23.339406  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:23.339425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:23.394875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:23.387312   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.387859   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.389847   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.390576   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.392042   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:23.394885  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:23.394895  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:25.926075  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:25.937168  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:25.956016  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.956028  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:25.956074  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:25.974143  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.974159  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:25.974202  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:25.992434  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.992449  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:25.992505  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:26.010399  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.010415  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:26.010474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:26.029958  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.029974  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:26.030039  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:26.048962  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.048977  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:26.049032  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:26.067563  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.067581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:26.067605  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:26.067627  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:26.115800  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:26.115818  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:26.131143  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:26.131160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:26.187038  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:26.179580   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.180759   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.181222   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.182757   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.183141   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:26.187049  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:26.187059  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:26.217787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:26.217807  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.746733  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:28.757902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:28.777575  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.777608  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:28.777664  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:28.798343  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.798363  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:28.798420  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:28.816843  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.816859  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:28.816904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:28.835441  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.835458  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:28.835507  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:28.855127  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.855139  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:28.855195  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:28.873368  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.873381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:28.873422  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:28.890543  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.890556  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:28.890565  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:28.890575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.918533  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:28.918553  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:28.963368  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:28.963389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:28.978933  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:28.978951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:29.035020  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:29.035033  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:29.035044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.565580  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:31.576645  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:31.596094  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.596110  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:31.596173  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:31.615329  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.615345  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:31.615394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:31.634047  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.634065  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:31.634117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:31.653553  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.653567  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:31.653626  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:31.671338  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.671354  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:31.671411  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:31.689466  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.689482  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:31.689536  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:31.707731  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.707744  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:31.707753  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:31.707761  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:31.759802  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:31.759828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:31.776809  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:31.776828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:31.833983  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:31.833996  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:31.834010  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.862984  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:31.863002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.392435  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:34.403543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:34.422799  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.422814  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:34.422857  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:34.442014  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.442029  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:34.442076  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:34.460995  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.461007  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:34.461056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:34.479671  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.479688  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:34.479737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:34.498097  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.498113  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:34.498169  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:34.515924  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.515939  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:34.515985  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:34.534433  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.534447  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:34.534455  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:34.534466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:34.589495  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:34.589505  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:34.589515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:34.619333  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:34.619352  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.647369  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:34.647385  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:34.692228  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:34.692249  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.207836  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:37.218829  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:37.237894  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.237908  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:37.237953  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:37.256000  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.256012  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:37.256054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:37.275097  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.275112  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:37.275161  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:37.294044  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.294059  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:37.294099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:37.312979  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.312995  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:37.313041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:37.331662  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.331675  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:37.331718  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:37.350197  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.350212  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:37.350223  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:37.350233  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:37.397328  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:37.397345  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.412835  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:37.412852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:37.468475  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:37.468487  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:37.468498  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:37.497915  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:37.497934  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.027516  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:40.039728  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:40.058705  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.058719  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:40.058762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:40.076167  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.076184  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:40.076231  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:40.095983  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.095996  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:40.096037  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:40.114657  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.114670  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:40.114717  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:40.133015  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.133028  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:40.133070  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:40.152127  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.152140  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:40.152187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:40.169569  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.169583  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:40.169622  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:40.169636  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:40.184978  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:40.184992  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:40.238860  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:01:40.238870  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:40.238879  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:40.268146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:40.268165  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.295931  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:40.295946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
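
The probe repeats on a just-under-3-second cadence (cycles at 23:01:14, :17, :20, :23, :25, :28, :31, :34, :37, and :40 above) until the restart budget is exhausted. A sketch of the same wait-with-deadline pattern, with a 240 s budget inferred from the 4m1.5s duration logged just below:

    deadline=$(( $(date +%s) + 240 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo 'control plane did not come back; falling through to kubeadm reset' >&2
        break
      fi
      sleep 2.5
    done
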
	I1222 23:01:42.843708  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:42.854816  158374 kubeadm.go:602] duration metric: took 4m1.499731906s to restartPrimaryControlPlane
	W1222 23:01:42.854901  158374 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 23:01:42.854978  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:01:43.257733  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:01:43.270528  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:01:43.278400  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:01:43.278439  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:01:43.285901  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:01:43.285910  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:01:43.285947  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:01:43.293882  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:01:43.293919  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:01:43.300825  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:01:43.307941  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:01:43.307983  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:01:43.314784  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.321699  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:01:43.321728  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.328397  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:01:43.335272  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:01:43.335301  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
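Taken together, the four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before kubeadm init runs (here every grep exits 2 because the files were already gone after the reset). A minimal shell equivalent of that loop, using the same paths and endpoint as logged; offered only as a sketch of the logic, not the code minikube runs:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already points at the expected endpoint;
        # otherwise (including when the file is missing) remove it
        sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done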
	I1222 23:01:43.341949  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:01:43.376034  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:01:43.376102  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:01:43.445165  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:01:43.445236  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:01:43.445264  158374 kubeadm.go:319] OS: Linux
	I1222 23:01:43.445301  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:01:43.445350  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:01:43.445392  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:01:43.445455  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:01:43.445494  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:01:43.445551  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:01:43.445588  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:01:43.445673  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:01:43.445736  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:01:43.500219  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:01:43.500396  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:01:43.500507  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:01:43.513368  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:01:43.515533  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:01:43.515634  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:01:43.515681  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:01:43.515765  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:01:43.515820  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:01:43.515882  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:01:43.515924  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:01:43.515975  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:01:43.516024  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:01:43.516083  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:01:43.516151  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:01:43.516181  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:01:43.516264  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:01:43.648060  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:01:43.775539  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:01:43.806099  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:01:43.912171  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:01:44.004004  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:01:44.004296  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:01:44.006366  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:01:44.008239  158374 out.go:252]   - Booting up control plane ...
	I1222 23:01:44.008308  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:01:44.008365  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:01:44.009041  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:01:44.026974  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:01:44.027088  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:01:44.033700  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:01:44.033947  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:01:44.034017  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:01:44.136722  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:01:44.136861  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:05:44.137086  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000408574s
	I1222 23:05:44.137129  158374 kubeadm.go:319] 
	I1222 23:05:44.137190  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:05:44.137216  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:05:44.137303  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:05:44.137309  158374 kubeadm.go:319] 
	I1222 23:05:44.137438  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:05:44.137492  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:05:44.137528  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:05:44.137531  158374 kubeadm.go:319] 
	I1222 23:05:44.140373  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.140752  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.140849  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:05:44.141147  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:05:44.141156  158374 kubeadm.go:319] 
	I1222 23:05:44.141230  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
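Before the retry below, note what kubeadm was actually polling: the wait-control-plane phase probes the kubelet healthz endpoint quoted in the error for up to 4m0s. Reproducing that probe by hand on the node shows immediately whether the kubelet ever comes up; both commands here are taken verbatim from the error text above:

    # 'ok' means the kubelet is healthy; 'connection refused' means it never started
    curl -sSL http://127.0.0.1:10248/healthz
    # the unit state systemd reports for it
    systemctl status kubelet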
	W1222 23:05:44.141360  158374 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000408574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
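The decisive clue in the stderr above is the cgroups v1 warning: kubelet v1.35 and newer refuses to start on a cgroup v1 host unless the configuration option the warning names, 'FailCgroupV1', is explicitly set to 'false' (and the validation is additionally skipped). A hedged sketch of what such a configuration fragment could look like, assuming the v1beta1 KubeletConfiguration serializes the option in the usual lower-camel case as failCgroupV1; the patch file name is hypothetical, and this is not the config minikube writes here (per the log, minikube patches /var/lib/kubelet/config.yaml itself):

    # hypothetical patch file; field name assumed to serialize as 'failCgroupV1'
    cat <<'EOF' > kubelet-cgroupv1-patch.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # opt back in to deprecated cgroup v1 support (see KEP 5573 linked above)
    failCgroupV1: false
    EOF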
	
	I1222 23:05:44.141451  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:05:44.555201  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:05:44.567871  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:05:44.567915  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:05:44.575883  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:05:44.575897  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:05:44.575941  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:05:44.583486  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:05:44.583527  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:05:44.590649  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:05:44.597769  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:05:44.597806  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:05:44.604798  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.611986  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:05:44.612034  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.619193  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:05:44.626515  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:05:44.626555  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:05:44.633629  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:05:44.735033  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.735554  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.792296  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:09:45.398895  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:09:45.398970  158374 kubeadm.go:319] 
	I1222 23:09:45.399090  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:09:45.401586  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:09:45.401671  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:09:45.401745  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:09:45.401791  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:09:45.401820  158374 kubeadm.go:319] OS: Linux
	I1222 23:09:45.401885  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:09:45.401955  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:09:45.402023  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:09:45.402088  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:09:45.402152  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:09:45.402201  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:09:45.402235  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:09:45.402274  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:09:45.402309  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:09:45.402367  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:09:45.402449  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:09:45.402536  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:09:45.402585  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:09:45.404239  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:09:45.404310  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:09:45.404360  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:09:45.404421  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:09:45.404472  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:09:45.404530  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:09:45.404569  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:09:45.404650  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:09:45.404705  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:09:45.404761  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:09:45.404827  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:09:45.404867  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:09:45.404910  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:09:45.404948  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:09:45.404989  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:09:45.405029  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:09:45.405075  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:09:45.405115  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:09:45.405181  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:09:45.405240  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:09:45.406503  158374 out.go:252]   - Booting up control plane ...
	I1222 23:09:45.406585  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:09:45.406677  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:09:45.406738  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:09:45.406832  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:09:45.406905  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:09:45.406993  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:09:45.407062  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:09:45.407092  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:09:45.407211  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:09:45.407300  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:09:45.407348  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000888774s
	I1222 23:09:45.407350  158374 kubeadm.go:319] 
	I1222 23:09:45.407409  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:09:45.407435  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:09:45.407521  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:09:45.407524  158374 kubeadm.go:319] 
	I1222 23:09:45.407628  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:09:45.407652  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:09:45.407675  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:09:45.407697  158374 kubeadm.go:319] 
	I1222 23:09:45.407753  158374 kubeadm.go:403] duration metric: took 12m4.079935698s to StartCluster
	I1222 23:09:45.407873  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:09:45.407938  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:09:45.445003  158374 cri.go:96] found id: ""
	I1222 23:09:45.445021  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.445027  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:09:45.445038  158374 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:09:45.445084  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:09:45.470772  158374 cri.go:96] found id: ""
	I1222 23:09:45.470788  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.470794  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:09:45.470799  158374 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:09:45.470845  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:09:45.495903  158374 cri.go:96] found id: ""
	I1222 23:09:45.495920  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.495927  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:09:45.495933  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:09:45.495983  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:09:45.523926  158374 cri.go:96] found id: ""
	I1222 23:09:45.523943  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.523952  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:09:45.523960  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:09:45.524021  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:09:45.551137  158374 cri.go:96] found id: ""
	I1222 23:09:45.551153  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.551164  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:09:45.551171  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:09:45.551226  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:09:45.576565  158374 cri.go:96] found id: ""
	I1222 23:09:45.576583  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.576611  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:09:45.576621  158374 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:09:45.576676  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:09:45.601969  158374 cri.go:96] found id: ""
	I1222 23:09:45.601983  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.601991  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:09:45.602003  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:09:45.602018  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:09:45.650169  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:09:45.650193  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:09:45.665853  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:09:45.665870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:09:45.722796  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
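The connection-refused errors against localhost:8441 are a downstream symptom rather than a separate fault: with the kubelet down, no static pods were ever launched, so there is no kube-apiserver to answer. The empty crictl sweeps earlier in this log confirm it; the equivalent one-line check, matching the command minikube itself runs:

    # zero output here means the apiserver container never existed
    sudo crictl ps -a --quiet --name=kube-apiserver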
	I1222 23:09:45.722812  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:09:45.722824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:09:45.751752  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:09:45.751775  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 23:09:45.780185  158374 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:09:45.780232  158374 out.go:285] * 
	W1222 23:09:45.780327  158374 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.780349  158374 out.go:285] * 
	W1222 23:09:45.780644  158374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:09:45.784145  158374 out.go:203] 
	W1222 23:09:45.785152  158374 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.785198  158374 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:09:45.785226  158374 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
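The suggestion above gives two concrete leads: inspect the kubelet journal, and retry with an explicit cgroup driver. Adapted to this profile (profile name taken from the log; whether the systemd driver actually helps on a cgroup v1 host is not established by this run, so treat this as the hedged suggestion it is):

    journalctl -xeu kubelet
    minikube start -p functional-384766 --extra-config=kubelet.cgroup-driver=systemd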
	I1222 23:09:45.786470  158374 out.go:203] 
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.415844375Z" level=info msg="Loading containers: done."
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
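The Docker journal above shows the runtime side came up cleanly: dockerd completed initialization (while warning that cgroup v1 support is deprecated) and cri-dockerd started with cgroupDriver set to cgroupfs, consistent with a cgroup v1 host. A quick way to confirm both facts from the engine itself, using standard docker info format fields:

    # prints e.g. "cgroupfs 1" if this host is still on cgroup v1
    docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'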
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:55.111835   38622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:55.112384   38622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:55.113672   38622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:55.114110   38622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:55.115709   38622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:09:55 up  2:52,  0 user,  load average: 0.73, 0.24, 0.37
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:09:51 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:52 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 329.
	Dec 22 23:09:52 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:52 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:52 functional-384766 kubelet[38084]: E1222 23:09:52.308510   38084 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:52 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:52 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:52 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 22 23:09:52 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:52 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:53 functional-384766 kubelet[38352]: E1222 23:09:53.060489   38352 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:53 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:53 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:53 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 22 23:09:53 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:53 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:53 functional-384766 kubelet[38436]: E1222 23:09:53.768275   38436 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:53 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:53 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:54 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 332.
	Dec 22 23:09:54 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:54 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:54 functional-384766 kubelet[38494]: E1222 23:09:54.551372   38494 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:54 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:54 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (340.295922ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (2.04s)
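
Note: the kubelet journal above carries the underlying failure: this kubelet build is configured to refuse startup on a cgroup v1 host, so it crash-loops (restart counter 329-332), the apiserver on port 8441 never comes up, and every kubectl call is refused. A minimal, generic check of the host's cgroup mode (plain coreutils, not part of the test suite) would be:

	# Prints the filesystem type mounted at /sys/fs/cgroup.
	# "cgroup2fs" means the host runs cgroup v2; "tmpfs" indicates
	# legacy cgroup v1, which this kubelet configuration rejects.
	stat -fc %T /sys/fs/cgroup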

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 23 more times]
I1222 23:10:16.904430   75803 retry.go:84] will retry after 6.6s: Temporary Error: Get "http://10.109.193.124": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 13 more times]
E1222 23:10:31.033493   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 2 more times]
I1222 23:10:33.536709   75803 retry.go:84] will retry after 8.5s: Temporary Error: Get "http://10.109.193.124": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 17 more times]
I1222 23:10:52.065494   75803 retry.go:84] will retry after 9.9s: Temporary Error: Get "http://10.109.193.124": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 48.2s)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 19 more times]
I1222 23:11:11.990028   75803 retry.go:84] will retry after 20.5s: Temporary Error: Get "http://10.109.193.124": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (duplicate log for 1m8.2s)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 18 more times]
E1222 23:11:30.660433   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous warning repeated 49 more times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1222 23:13:34.088080   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[after the cert-rotation error, the same connection-refused warning from helpers_test.go:338 repeated a further 19 times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
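Every one of those warnings is the same underlying failure: a TCP dial to the apiserver endpoint 192.168.49.2:8441 being refused, until the client rate limiter finally gave up on the context deadline. A minimal sketch, outside the test harness, that reproduces the dial check directly (the address is taken from the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Apiserver endpoint copied from the warnings above.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// With the apiserver down, this prints the same
		// "connect: connection refused" error the helper logged.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}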
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (311.320118ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
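For context, the poll that timed out is a pod list filtered by the integration-test=storage-provisioner label. A hedged sketch of the equivalent client-go call (the kubeconfig path is a hypothetical placeholder; the real harness builds its client from the minikube profile's kubeconfig):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// The same namespace and label selector that appear in the warnings.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "integration-test=storage-provisioner"})
	if err != nil {
		// With the apiserver unreachable, this returns the dial error
		// repeated throughout the log above.
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Printf("found %d matching pod(s)\n", len(pods.Items))
}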
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
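The inspect dump above shows the kicbase container itself is still healthy: State.Status is "running" and the apiserver port 8441 is published to 127.0.0.1:32786; only the apiserver inside it is down, which matches the Host=Running / APIServer=Stopped split in the status checks around it. A minimal sketch, outside the harness, of querying just that one field instead of the full dump:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "docker inspect --format" with a Go template narrows the output
	// to a single field of the container state.
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Status}}", "functional-384766").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("container state:", strings.TrimSpace(string(out)))
}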
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (293.970279ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                           ARGS                                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ license        │                                                                                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ ssh            │ functional-384766 ssh findmnt -T /mount3                                                                                                                                                 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ mount          │ -p functional-384766 --kill=true                                                                                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ update-context │ functional-384766 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ update-context │ functional-384766 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ update-context │ functional-384766 update-context --alsologtostderr -v=2                                                                                                                                  │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls                                                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls                                                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls                                                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls                                                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls                                                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls --format short --alsologtostderr                                                                                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ image          │ functional-384766 image ls --format yaml --alsologtostderr                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ ssh            │ functional-384766 ssh pgrep buildkitd                                                                                                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ image          │ functional-384766 image ls --format json --alsologtostderr                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls --format table --alsologtostderr                                                                                                                              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image build -t localhost/my-image:functional-384766 testdata/build --alsologtostderr                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ image          │ functional-384766 image ls                                                                                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:10:00
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:10:00.471084  189065 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:10:00.471344  189065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.471353  189065 out.go:374] Setting ErrFile to fd 2...
	I1222 23:10:00.471358  189065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.471534  189065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:10:00.471985  189065 out.go:368] Setting JSON to false
	I1222 23:10:00.472909  189065 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10340,"bootTime":1766434660,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:10:00.472959  189065 start.go:143] virtualization: kvm guest
	I1222 23:10:00.474587  189065 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:10:00.475694  189065 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:10:00.475703  189065 notify.go:221] Checking for updates...
	I1222 23:10:00.477739  189065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:10:00.478849  189065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:10:00.479809  189065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:10:00.480818  189065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:10:00.482103  189065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:10:00.483637  189065 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:10:00.484147  189065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:10:00.508451  189065 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:10:00.508633  189065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.567029  189065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.557282865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.567130  189065 docker.go:319] overlay module found
	I1222 23:10:00.568775  189065 out.go:179] * Using the docker driver based on existing profile
	I1222 23:10:00.569819  189065 start.go:309] selected driver: docker
	I1222 23:10:00.569832  189065 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.569903  189065 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:10:00.570021  189065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.622749  189065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.61321899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed
by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.623382  189065 cni.go:84] Creating CNI manager for ""
	I1222 23:10:00.623470  189065 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:10:00.623528  189065 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.625075  189065 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 23:10:15 functional-384766 dockerd[17979]: time="2025-12-22T23:10:15.313637828Z" level=info msg="sbJoin: gwep4 ''->'0bc9fecd70e0', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:13:54.558878   44091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:13:54.559328   44091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:13:54.560399   44091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:13:54.560832   44091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:13:54.562304   44091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:13:54 up  2:56,  0 user,  load average: 0.02, 0.13, 0.29
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:13:51 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:13:52 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 22 23:13:52 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:52 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:52 functional-384766 kubelet[43910]: E1222 23:13:52.284300   43910 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:13:52 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:13:52 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:13:52 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 22 23:13:52 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:52 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:53 functional-384766 kubelet[43921]: E1222 23:13:53.034110   43921 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:13:53 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:13:53 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:13:53 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 22 23:13:53 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:53 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:53 functional-384766 kubelet[43945]: E1222 23:13:53.724855   43945 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:13:53 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:13:53 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:13:54 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 22 23:13:54 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:54 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:13:54 functional-384766 kubelet[44072]: E1222 23:13:54.541321   44072 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:13:54 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:13:54 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (289.638122ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (241.47s)
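
Note: every kubelet restart captured above dies on the same validation error: "kubelet is configured to not run on a host using cgroup v1". With kubelet never coming up, no static pods (including the apiserver on 8441) can start, which is why this whole group of tests fails with connection refused. A quick, generic way to confirm which cgroup version a host runs (plain coreutils, not part of this suite):

	stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" => cgroup v2 (unified); "tmpfs" => cgroup v1

This is consistent with the dockerd startup warning earlier in the log ("Support for cgroup v1 is deprecated ...").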

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (2.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-384766 replace --force -f testdata/mysql.yaml
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-384766 replace --force -f testdata/mysql.yaml: exit status 1 (65.717988ms)

** stderr ** 
	error when deleting "testdata/mysql.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/services/mysql": dial tcp 192.168.49.2:8441: connect: connection refused
	error when deleting "testdata/mysql.yaml": Delete "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1805: failed to kubectl replace mysql: args "kubectl --context functional-384766 replace --force -f testdata/mysql.yaml" failed: exit status 1
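
Note: both delete calls fail with "connect: connection refused" against 192.168.49.2:8441, so the apiserver was already down before this test ran; the replace never reached the cluster. Two generic ways to probe the endpoint directly, independent of the harness (a sketch, not suite code):

	curl -k https://192.168.49.2:8441/readyz
	kubectl --context functional-384766 get --raw /readyz

Both would be expected to fail identically here while kubelet is crash-looping on the cgroup v1 check shown in the earlier post-mortem.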
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
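
Note: the inspect output above shows the apiserver port published at 127.0.0.1:32786 (the "8441/tcp" mapping). When replaying a post-mortem like this, the same value can be pulled out with a Go template instead of scanning the full JSON; a generic docker CLI sketch, not part of the harness:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-384766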
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (367.352646ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-580825 --kill=true                                                                                            │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ ssh            │ functional-580825 ssh echo hello                                                                                            │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ ssh            │ functional-580825 ssh cat /etc/hostname                                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ config         │ functional-384766 config get cpus                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ start          │ -p functional-580825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                 │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ tunnel         │ functional-580825 tunnel --alsologtostderr                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ tunnel         │ functional-580825 tunnel --alsologtostderr                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ tunnel         │ functional-580825 tunnel --alsologtostderr                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start          │ -p functional-580825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                 │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ start          │ -p functional-580825 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker                           │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │                     │
	│ service        │ functional-580825 service list                                                                                              │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ dashboard      │ --url --port 0 -p functional-580825 --alsologtostderr -v=1                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ service        │ functional-580825 service list -o json                                                                                      │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ service        │ functional-580825 service --namespace=default --https --url hello-node                                                      │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ service        │ functional-580825 service hello-node --url --format={{.IP}}                                                                 │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ service        │ functional-580825 service hello-node --url                                                                                  │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ update-context │ functional-580825 update-context --alsologtostderr -v=2                                                                     │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ image          │ functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr │ functional-580825 │ jenkins │ v1.37.0 │ 22 Dec 25 22:42 UTC │ 22 Dec 25 22:42 UTC │
	│ ssh            │ functional-384766 ssh sudo cat /usr/share/ca-certificates/75803.pem                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ config         │ functional-384766 config get cpus                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ cp             │ functional-384766 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh            │ functional-384766 ssh sudo cat /etc/ssl/certs/51391683.0                                                                    │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh            │ functional-384766 ssh -n functional-384766 sudo cat /home/docker/cp-test.txt                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:57:36
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:57:36.254392  158374 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:57:36.254700  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254705  158374 out.go:374] Setting ErrFile to fd 2...
	I1222 22:57:36.254708  158374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:57:36.254883  158374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:57:36.255420  158374 out.go:368] Setting JSON to false
	I1222 22:57:36.256374  158374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9596,"bootTime":1766434660,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:57:36.256458  158374 start.go:143] virtualization: kvm guest
	I1222 22:57:36.258393  158374 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:57:36.259562  158374 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:57:36.259584  158374 notify.go:221] Checking for updates...
	I1222 22:57:36.261710  158374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:57:36.262944  158374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:57:36.264212  158374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:57:36.265355  158374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:57:36.266271  158374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:57:36.267661  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:36.267820  158374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:57:36.296187  158374 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:57:36.296285  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.350829  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.341376778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:57:36.350930  158374 docker.go:319] overlay module found
	I1222 22:57:36.352570  158374 out.go:179] * Using the docker driver based on existing profile
	I1222 22:57:36.353588  158374 start.go:309] selected driver: docker
	I1222 22:57:36.353611  158374 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.353719  158374 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:57:36.353830  158374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:57:36.406492  158374 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2025-12-22 22:57:36.397760538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
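The daemon snapshot above can be reproduced by hand. A minimal sketch, assuming a local Docker CLI; the field selection is illustrative, not minikube's actual health check:

	# Query the same daemon state minikube decodes above; --format takes a Go template.
	docker system info --format '{{json .}}'
	# Pull out a few of the fields the log shows (ServerVersion, NCPU, CgroupDriver).
	docker system info --format 'version={{.ServerVersion}} ncpu={{.NCPU}} cgroup={{.CgroupDriver}}'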
	I1222 22:57:36.407140  158374 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 22:57:36.407175  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:36.407232  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:36.407286  158374 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:36.408996  158374 out.go:179] * Starting "functional-384766" primary control-plane node in "functional-384766" cluster
	I1222 22:57:36.410078  158374 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:57:36.411119  158374 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:57:36.412129  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:36.412159  158374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:57:36.412174  158374 cache.go:65] Caching tarball of preloaded images
	I1222 22:57:36.412242  158374 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 22:57:36.412248  158374 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 22:57:36.412244  158374 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:57:36.412341  158374 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/config.json ...
	I1222 22:57:36.431941  158374 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 22:57:36.431955  158374 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 22:57:36.431969  158374 cache.go:243] Successfully downloaded all kic artifacts
	I1222 22:57:36.431996  158374 start.go:360] acquireMachinesLock for functional-384766: {Name:mk956fe60c71d3d96aa218ecf73d6e39f6ab1bf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 22:57:36.432059  158374 start.go:364] duration metric: took 40.356µs to acquireMachinesLock for "functional-384766"
	I1222 22:57:36.432072  158374 start.go:96] Skipping create...Using existing machine configuration
	I1222 22:57:36.432076  158374 fix.go:54] fixHost starting: 
	I1222 22:57:36.432265  158374 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
	I1222 22:57:36.449079  158374 fix.go:112] recreateIfNeeded on functional-384766: state=Running err=<nil>
	W1222 22:57:36.449100  158374 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 22:57:36.450671  158374 out.go:252] * Updating the running docker "functional-384766" container ...
	I1222 22:57:36.450705  158374 machine.go:94] provisionDockerMachine start ...
	I1222 22:57:36.450764  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.467607  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.467835  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.467841  158374 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 22:57:36.608433  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.608449  158374 ubuntu.go:182] provisioning hostname "functional-384766"
	I1222 22:57:36.608504  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.626300  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.626509  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.626516  158374 main.go:144] libmachine: About to run SSH command:
	sudo hostname functional-384766 && echo "functional-384766" | sudo tee /etc/hostname
	I1222 22:57:36.777413  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-384766
	
	I1222 22:57:36.777486  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:36.795160  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:36.795380  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:36.795396  158374 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-384766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384766/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-384766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 22:57:36.935922  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 22:57:36.935942  158374 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 22:57:36.935957  158374 ubuntu.go:190] setting up certificates
	I1222 22:57:36.935965  158374 provision.go:84] configureAuth start
	I1222 22:57:36.936023  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:36.954219  158374 provision.go:143] copyHostCerts
	I1222 22:57:36.954277  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 22:57:36.954291  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 22:57:36.954367  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 22:57:36.954466  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 22:57:36.954469  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 22:57:36.954495  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 22:57:36.954569  158374 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 22:57:36.954572  158374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 22:57:36.954631  158374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 22:57:36.954687  158374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.functional-384766 san=[127.0.0.1 192.168.49.2 functional-384766 localhost minikube]
	I1222 22:57:36.981147  158374 provision.go:177] copyRemoteCerts
	I1222 22:57:36.981202  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 22:57:36.981239  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.000716  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.101499  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 22:57:37.118740  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 22:57:37.135018  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 22:57:37.151214  158374 provision.go:87] duration metric: took 215.234679ms to configureAuth
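configureAuth above regenerated the docker daemon's server certificate with the listed SANs and copied it to /etc/docker on the node. What actually landed there can be checked with a quick openssl dump; a sketch, assuming the profile is still running and OpenSSL 1.1.1+ on the node for -ext:

	# Print subject and SANs of the server cert scp'd to the node above.
	minikube -p functional-384766 ssh -- sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem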
	I1222 22:57:37.151234  158374 ubuntu.go:206] setting minikube options for container-runtime
	I1222 22:57:37.151390  158374 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 22:57:37.151430  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.168491  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.168730  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.168737  158374 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 22:57:37.310361  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 22:57:37.310376  158374 ubuntu.go:71] root file system type: overlay
	I1222 22:57:37.310489  158374 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 22:57:37.310547  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.329095  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.329306  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.329369  158374 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 22:57:37.478917  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 22:57:37.478994  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.496454  158374 main.go:144] libmachine: Using SSH client type: native
	I1222 22:57:37.496687  158374 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1222 22:57:37.496699  158374 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 22:57:37.641628  158374 main.go:144] libmachine: SSH cmd err, output: <nil>: 
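The one-liner above is the idempotency guard for the unit rewrite: the regenerated docker.service is only swapped in, and docker only restarted, when the rendered file actually differs from what is installed. Reduced to its skeleton, with the same paths as the log (the log's variant also re-enables the unit):

	# Install the regenerated unit only on content change, so an unchanged
	# config never triggers a docker restart.
	CUR=/lib/systemd/system/docker.service
	NEW=/lib/systemd/system/docker.service.new
	sudo diff -u "$CUR" "$NEW" || {
	  sudo mv "$NEW" "$CUR"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}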
	I1222 22:57:37.641654  158374 machine.go:97] duration metric: took 1.190941144s to provisionDockerMachine
	I1222 22:57:37.641665  158374 start.go:293] postStartSetup for "functional-384766" (driver="docker")
	I1222 22:57:37.641676  158374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 22:57:37.641727  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 22:57:37.641757  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.659069  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.759899  158374 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 22:57:37.763912  158374 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 22:57:37.763929  158374 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 22:57:37.763939  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 22:57:37.763985  158374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 22:57:37.764057  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 22:57:37.764125  158374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts -> hosts in /etc/test/nested/copy/75803
	I1222 22:57:37.764158  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/75803
	I1222 22:57:37.772288  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:37.789657  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts --> /etc/test/nested/copy/75803/hosts (40 bytes)
	I1222 22:57:37.805946  158374 start.go:296] duration metric: took 164.267669ms for postStartSetup
	I1222 22:57:37.806019  158374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 22:57:37.806054  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.823397  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.920964  158374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 22:57:37.925567  158374 fix.go:56] duration metric: took 1.493483875s for fixHost
	I1222 22:57:37.925585  158374 start.go:83] releasing machines lock for "functional-384766", held for 1.493518865s
	I1222 22:57:37.925676  158374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384766
	I1222 22:57:37.944340  158374 ssh_runner.go:195] Run: cat /version.json
	I1222 22:57:37.944379  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.944410  158374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 22:57:37.944475  158374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
	I1222 22:57:37.962270  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:37.963480  158374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
	I1222 22:57:38.111745  158374 ssh_runner.go:195] Run: systemctl --version
	I1222 22:57:38.118245  158374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 22:57:38.122628  158374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 22:57:38.122679  158374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 22:57:38.130349  158374 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 22:57:38.130362  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.130390  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.130482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.143844  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 22:57:38.152204  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 22:57:38.160833  158374 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.160878  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 22:57:38.168944  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.176827  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 22:57:38.185035  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 22:57:38.193068  158374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 22:57:38.200733  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 22:57:38.208877  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 22:57:38.217062  158374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 22:57:38.225212  158374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 22:57:38.231954  158374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 22:57:38.238562  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.319900  158374 ssh_runner.go:195] Run: sudo systemctl restart containerd
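The sed passes above rewrite /etc/containerd/config.toml in place so containerd's cgroup handling matches the "cgroupfs" driver detected on the host. The decisive edit, isolated as a sketch against a stock config.toml:

	# Pin the runc shim to cgroupfs (SystemdCgroup = false) to match the detected
	# host driver, then restart containerd so the setting takes effect.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd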
	I1222 22:57:38.394735  158374 start.go:496] detecting cgroup driver to use...
	I1222 22:57:38.394777  158374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 22:57:38.394829  158374 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 22:57:38.408181  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.420724  158374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 22:57:38.437862  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 22:57:38.450387  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 22:57:38.462197  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 22:57:38.475419  158374 ssh_runner.go:195] Run: which cri-dockerd
	I1222 22:57:38.478805  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 22:57:38.485878  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 22:57:38.497638  158374 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 22:57:38.579501  158374 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 22:57:38.662636  158374 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 22:57:38.662750  158374 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 22:57:38.675412  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 22:57:38.686668  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:38.767093  158374 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 22:57:39.452892  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 22:57:39.465276  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 22:57:39.477001  158374 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 22:57:39.491722  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.503501  158374 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 22:57:39.584904  158374 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 22:57:39.672762  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.748726  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 22:57:39.768653  158374 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 22:57:39.780104  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:39.862790  158374 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 22:57:39.934384  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 22:57:39.948030  158374 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 22:57:39.948084  158374 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 22:57:39.952002  158374 start.go:564] Will wait 60s for crictl version
	I1222 22:57:39.952049  158374 ssh_runner.go:195] Run: which crictl
	I1222 22:57:39.955397  158374 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 22:57:39.979213  158374 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 22:57:39.979270  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 22:57:40.004367  158374 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
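The probes above confirm both views of the same engine: the CRI view through cri-dockerd and the native view through the docker CLI. Reproduced by hand on the node, a sketch using the endpoint written into /etc/crictl.yaml earlier:

	# CRI view, via the cri-dockerd socket configured above.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	# Native engine view.
	docker version --format '{{.Server.Version}}'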
	I1222 22:57:40.031792  158374 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 22:57:40.031863  158374 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 22:57:40.047933  158374 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1222 22:57:40.053698  158374 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1222 22:57:40.054726  158374 kubeadm.go:884] updating cluster {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 22:57:40.054846  158374 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:57:40.054890  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.076020  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.076046  158374 docker.go:624] Images already preloaded, skipping extraction
	I1222 22:57:40.076111  158374 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 22:57:40.096347  158374 docker.go:694] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-384766
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1222 22:57:40.096366  158374 cache_images.go:86] Images are preloaded, skipping loading
	I1222 22:57:40.096374  158374 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1222 22:57:40.096468  158374 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384766 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 22:57:40.096517  158374 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 22:57:40.147179  158374 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1222 22:57:40.147206  158374 cni.go:84] Creating CNI manager for ""
	I1222 22:57:40.147226  158374 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:57:40.147236  158374 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 22:57:40.147256  158374 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384766 NodeName:functional-384766 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 22:57:40.147375  158374 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-384766"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 22:57:40.147436  158374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 22:57:40.155394  158374 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 22:57:40.155439  158374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 22:57:40.163036  158374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 22:57:40.175169  158374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 22:57:40.187093  158374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
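The three scp-from-memory writes above stage the kubelet drop-in, the kubelet unit, and the new kubeadm config. Both halves can be sanity-checked on the node before anything restarts; a sketch (kubeadm config validate is available in recent kubeadm releases):

	# Show the merged kubelet unit: base file plus the 10-kubeadm.conf drop-in just written.
	sudo systemctl cat kubelet.service
	# Validate the staged config against kubeadm's API types without starting anything.
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new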
	I1222 22:57:40.198818  158374 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1222 22:57:40.202222  158374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 22:57:40.283126  158374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 22:57:40.747886  158374 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766 for IP: 192.168.49.2
	I1222 22:57:40.747899  158374 certs.go:195] generating shared ca certs ...
	I1222 22:57:40.747914  158374 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:57:40.748072  158374 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 22:57:40.748113  158374 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 22:57:40.748119  158374 certs.go:257] generating profile certs ...
	I1222 22:57:40.748199  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.key
	I1222 22:57:40.748236  158374 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key.c9e079a8
	I1222 22:57:40.748278  158374 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key
	I1222 22:57:40.748397  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 22:57:40.748423  158374 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 22:57:40.748429  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 22:57:40.748451  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 22:57:40.748470  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 22:57:40.748489  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 22:57:40.748525  158374 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 22:57:40.749053  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 22:57:40.768237  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 22:57:40.787559  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 22:57:40.804276  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 22:57:40.820613  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 22:57:40.836790  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 22:57:40.852839  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 22:57:40.869050  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 22:57:40.885231  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 22:57:40.901347  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 22:57:40.917338  158374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 22:57:40.933332  158374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 22:57:40.944903  158374 ssh_runner.go:195] Run: openssl version
	I1222 22:57:40.950515  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.957071  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 22:57:40.963749  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.966999  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 22:57:40.967032  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 22:57:41.000342  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 22:57:41.007579  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.014450  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 22:57:41.021442  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024853  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.024902  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 22:57:41.058138  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 22:57:41.065135  158374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.071858  158374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 22:57:41.078672  158374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082051  158374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.082083  158374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 22:57:41.115012  158374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 22:57:41.122326  158374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 22:57:41.125872  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 22:57:41.158840  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 22:57:41.191689  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 22:57:41.224669  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 22:57:41.258802  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 22:57:41.292531  158374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
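Each -checkend 86400 probe above asks openssl whether the certificate survives the next 24 hours; a non-zero exit is what pushes the restart path into regenerating certs. The same check for a single file, as a sketch:

	# Exit 0: valid for at least another 86400s (24h); exit 1: expiring, regenerate.
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "ok for 24h" || echo "expiring soon"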
	I1222 22:57:41.327828  158374 kubeadm.go:401] StartCluster: {Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:57:41.327941  158374 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.347229  158374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 22:57:41.355058  158374 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 22:57:41.355067  158374 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 22:57:41.355102  158374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 22:57:41.362198  158374 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.362672  158374 kubeconfig.go:125] found "functional-384766" server: "https://192.168.49.2:8441"
	I1222 22:57:41.363809  158374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 22:57:41.371022  158374 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 22:43:13.034628184 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 22:57:40.197478933 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
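Drift detection above is just a diff of the last applied kubeadm config against the freshly rendered one; any hunk, like the admission-plugins change shown, routes this start down the reconfigure path. The skeleton:

	# Reconfigure only when the rendered kubeadm config differs from the applied one.
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo "config drift: reconfiguring control plane from kubeadm.yaml.new"
	fi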
	I1222 22:57:41.371029  158374 kubeadm.go:1161] stopping kube-system containers ...
	I1222 22:57:41.371066  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 22:57:41.389715  158374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 22:57:41.415695  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 22:57:41.423304  158374 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 22 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 22 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 22 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 22 22:47 /etc/kubernetes/scheduler.conf
	
	I1222 22:57:41.423364  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 22:57:41.430717  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 22:57:41.437811  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.437848  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 22:57:41.444879  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.452191  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.452233  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 22:57:41.459225  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 22:57:41.466383  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 22:57:41.466418  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 22:57:41.473427  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
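The grep/rm pairs above enforce a simple rule: a generated kubeconfig is kept only if it already references the expected control-plane endpoint (admin.conf matched and was kept); otherwise it is deleted so the kubeadm kubeconfig phase below regenerates it against the right server. Condensed into a loop, with the same endpoint and paths as in the log (a sketch, not the actual implementation):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in kubelet.conf controller-manager.conf scheduler.conf; do
        # grep -q exits nonzero when the endpoint is missing; remove the
        # stale file so "kubeadm init phase kubeconfig" recreates it.
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done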
	I1222 22:57:41.480724  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.518575  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:41.974225  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.135961  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 22:57:42.183844  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
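Rather than a full "kubeadm init", the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same config file. Roughly (a sketch of the sequence shown above, using the versioned kubeadm path from the log):

    KPATH="/var/lib/minikube/binaries/v1.35.0-rc.1"
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is intentionally unquoted so "certs all" splits into two args.
        sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config "$CFG"
    done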
	I1222 22:57:42.223254  158374 api_server.go:52] waiting for apiserver process to appear ...
	I1222 22:57:42.223318  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" poll repeated every ~500 ms from 22:57:42.723 onward, never matching a kube-apiserver process ...]
	I1222 22:58:41.723695  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
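The wait loop behind those polls is a plain retry-until-deadline: run pgrep every half second and stop on the first match. A self-contained bash equivalent (the 60 s budget is assumed for illustration; minikube's actual deadline may differ):

    deadline=$((SECONDS + 60))   # assumed 60 s budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "kube-apiserver process never appeared" >&2
            break
        fi
        sleep 0.5
    done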
	I1222 22:58:42.223527  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:42.242498  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.242530  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:42.242576  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:42.263682  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.263696  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:42.263747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:42.284235  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.284250  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:42.284330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:42.303204  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.303219  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:42.303263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:42.321387  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.321404  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:42.321461  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:42.340277  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.340290  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:42.340333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:42.359009  158374 logs.go:282] 0 containers: []
	W1222 22:58:42.359025  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:42.359034  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:42.359044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:42.407304  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:42.407323  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:42.423167  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:42.423184  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:42.478018  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:42.471043   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.471587   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473144   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.473549   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:42.474977   19939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
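Those "connection refused" errors on [::1]:8441 mean nothing is listening on the apiserver port at all, which is consistent with the empty container listings above. A quick manual probe of the same port (a diagnostic suggestion, not part of minikube's own flow):

    # "Connection refused" here confirms no apiserver is bound to 8441.
    curl -k --connect-timeout 2 https://localhost:8441/healthz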
	I1222 22:58:42.478032  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:42.478050  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:42.508140  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:42.508159  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
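The "container status" step uses a small fallback chain: run crictl if it resolves on PATH, otherwise fall back to plain docker. The same idea written out (a sketch of the one-liner in the log):

    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a      # preferred: CRI-level view of all containers
    else
        sudo docker ps -a      # fallback when crictl is not installed
    fi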
	I1222 22:58:45.047948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:45.058851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:45.078438  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.078457  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:45.078506  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:45.096664  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.096678  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:45.096729  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:45.114982  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.114995  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:45.115033  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:45.132907  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.132920  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:45.132960  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:45.151352  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.151368  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:45.151409  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:45.169708  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.169725  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:45.169767  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:45.187775  158374 logs.go:282] 0 containers: []
	W1222 22:58:45.187790  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:45.187802  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:45.187814  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:45.242776  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:45.235974   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.236524   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238023   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.238438   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:45.239937   20088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:45.242790  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:45.242800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:45.273873  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:45.273892  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:45.303522  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:45.303541  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:45.351682  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:45.351702  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:47.869586  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:47.880760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:47.899543  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.899560  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:47.899617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:47.917954  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.917970  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:47.918017  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:47.936207  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.936224  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:47.936269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:47.954310  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.954328  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:47.954376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:47.971746  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.971762  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:47.971806  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:47.989993  158374 logs.go:282] 0 containers: []
	W1222 22:58:47.990008  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:47.990054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:48.008188  158374 logs.go:282] 0 containers: []
	W1222 22:58:48.008204  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:48.008215  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:48.008227  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:48.056174  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:48.056192  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:48.071128  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:48.071143  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:48.124584  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:48.117861   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.118352   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.119971   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.120348   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:48.121842   20249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:48.124621  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:48.124635  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:48.155889  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:48.155907  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:50.685742  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:50.696961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:50.716371  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.716385  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:50.716430  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:50.734780  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.734798  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:50.734842  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:50.753152  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.753169  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:50.753213  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:50.771281  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.771296  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:50.771338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:50.788814  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.788826  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:50.788872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:50.806768  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.806781  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:50.806837  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:50.824539  158374 logs.go:282] 0 containers: []
	W1222 22:58:50.824552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:50.824561  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:50.824581  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:50.873346  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:50.873363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:50.888174  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:50.888188  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:50.942890  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:50.935978   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.936526   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938077   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.938499   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:50.939984   20403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:50.942904  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:50.942915  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:50.971205  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:50.971223  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:53.500770  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:53.512538  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:53.535794  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.535812  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:53.535872  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:53.554667  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.554684  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:53.554739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:53.573251  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.573267  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:53.573317  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:53.591664  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.591686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:53.591739  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:53.610128  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.610141  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:53.610183  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:53.628089  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.628105  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:53.628148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:53.645890  158374 logs.go:282] 0 containers: []
	W1222 22:58:53.645908  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:53.645919  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:53.645932  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:53.692043  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:53.692062  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:53.707092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:53.707107  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:53.761308  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:53.754291   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.754841   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756416   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.756807   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:53.758250   20560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:53.761320  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:53.761331  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:53.789713  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:53.789730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:56.318819  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:56.329790  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:56.348795  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.348808  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:56.348851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:56.366850  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.366866  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:56.366932  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:56.385468  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.385483  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:56.385530  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:56.404330  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.404345  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:56.404406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:56.422533  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.422549  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:56.422631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:56.440667  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.440681  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:56.440742  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:56.459073  158374 logs.go:282] 0 containers: []
	W1222 22:58:56.459088  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:56.459099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:56.459113  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:58:56.506766  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:56.506783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:56.523645  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:56.523667  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:56.580517  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:56.573333   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.573875   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.575514   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.576017   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:56.577576   20718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:56.580531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:56.580543  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:56.610571  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:56.610588  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.140001  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:58:59.151059  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:58:59.169787  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.169801  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:58:59.169840  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:58:59.187907  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.187919  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:58:59.187959  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:58:59.206755  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.206770  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:58:59.206811  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:58:59.225123  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.225139  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:58:59.225179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:58:59.243400  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.243414  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:58:59.243453  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:58:59.261475  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.261492  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:58:59.261556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:58:59.279819  158374 logs.go:282] 0 containers: []
	W1222 22:58:59.279834  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:58:59.279844  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:58:59.279855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:58:59.295024  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:58:59.295046  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:58:59.349874  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:58:59.343012   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.343571   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345099   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345548   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.347033   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:58:59.343012   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.343571   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345099   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.345548   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:58:59.347033   20862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 22:58:59.349889  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:58:59.349902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:58:59.381356  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:58:59.381378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:58:59.409144  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:58:59.409160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:01.955340  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:01.966166  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:01.985075  158374 logs.go:282] 0 containers: []
	W1222 22:59:01.985088  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:01.985135  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:02.003681  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.003695  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:02.003748  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:02.022064  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.022081  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:02.022127  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:02.040290  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.040302  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:02.040346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:02.058109  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.058123  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:02.058167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:02.076398  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.076415  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:02.076469  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:02.095264  158374 logs.go:282] 0 containers: []
	W1222 22:59:02.095326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:02.095338  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:02.095350  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:02.140655  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:02.140678  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:02.156234  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:02.156248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:02.212079  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:02.205182   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.205716   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207280   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.207782   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:02.209314   21023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:02.212094  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:02.212106  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:02.241399  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:02.241415  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
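	Each probe cycle above runs the same per-component check. Below is a minimal Go sketch of that probe, assuming only that containers follow the k8s_<component> naming visible in the docker ps commands; it is an illustration of the pattern, not minikube's actual logs.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component list and the k8s_ name prefix are copied from the
		// docker ps commands in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("docker ps failed for %q: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}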
	I1222 22:59:04.771709  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:04.783605  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:04.802797  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.802811  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:04.802907  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:04.822172  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.822187  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:04.822232  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:04.840265  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.840280  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:04.840320  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:04.858270  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.858287  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:04.858329  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:04.876142  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.876158  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:04.876204  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:04.894156  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.894169  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:04.894209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:04.912355  158374 logs.go:282] 0 containers: []
	W1222 22:59:04.912373  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:04.912383  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:04.912393  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:04.940312  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:04.940332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:04.985353  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:04.985370  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:05.000242  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:05.000264  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:05.054276  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:05.047325   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.047883   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049424   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.049887   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:05.051401   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:05.054288  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:05.054298  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
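	Every describe-nodes attempt fails before any API request is sent: the TCP dial to localhost:8441 is refused because no kube-apiserver is listening. A self-contained Go check that reproduces that symptom (port 8441 matches the --apiserver-port this test profile starts with):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// With no apiserver container running, this dial fails with
		// "connection refused", the same error kubectl reports above.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is reachable")
	}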
	I1222 22:59:07.583327  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:07.594487  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:07.614008  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.614023  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:07.614073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:07.633345  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.633364  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:07.633410  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:07.651888  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.651900  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:07.651939  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:07.670373  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.670389  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:07.670431  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:07.687752  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.687772  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:07.687819  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:07.707382  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.707397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:07.707449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:07.725692  158374 logs.go:282] 0 containers: []
	W1222 22:59:07.725705  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:07.725714  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:07.725724  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:07.741276  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:07.741290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:07.807688  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:07.800646   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.801223   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.802769   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.803177   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:07.804708   21327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:07.807698  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:07.807708  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:07.838193  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:07.838211  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:07.867411  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:07.867429  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.417278  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:10.428172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:10.447192  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.447210  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:10.447268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:10.465742  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.465755  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:10.465802  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:10.483930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.483943  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:10.483982  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:10.502550  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.502564  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:10.502631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:10.521157  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.521170  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:10.521217  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:10.539930  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.539944  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:10.539988  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:10.557819  158374 logs.go:282] 0 containers: []
	W1222 22:59:10.557836  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:10.557847  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:10.557860  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:10.586007  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:10.586023  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:10.630906  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:10.630928  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:10.645969  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:10.645986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:10.700369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:10.693704   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.694182   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.695738   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.696170   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:10.697667   21501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:10.700383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:10.700396  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.229999  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:13.241245  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:13.261612  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.261629  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:13.261685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:13.279825  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.279843  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:13.279893  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:13.297933  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.297951  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:13.298008  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:13.316218  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.316235  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:13.316315  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:13.334375  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.334389  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:13.334444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:13.353104  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.353123  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:13.353179  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:13.371772  158374 logs.go:282] 0 containers: []
	W1222 22:59:13.371791  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:13.371802  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:13.371816  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:13.419777  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:13.419800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:13.435473  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:13.435489  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:13.490824  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:13.484010   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.484560   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486103   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.486541   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:13.488011   21640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:13.490835  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:13.490848  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:13.519782  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:13.519800  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.052715  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:16.064085  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:16.083176  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.083195  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:16.083255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:16.102468  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.102485  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:16.102532  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:16.121564  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.121580  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:16.121654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:16.140862  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.140879  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:16.140928  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:16.159281  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.159295  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:16.159347  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:16.177569  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.177606  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:16.177659  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:16.196491  158374 logs.go:282] 0 containers: []
	W1222 22:59:16.196507  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:16.196516  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:16.196526  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:16.225379  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:16.225399  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:16.270312  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:16.270332  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:16.285737  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:16.285752  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:16.339892  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:16.332955   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.333507   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335037   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.335481   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:16.337003   21812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:16.339906  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:16.339924  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
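	The pgrep timestamps show the health check retrying roughly every three seconds. A hypothetical sketch of that wait loop follows; real minikube runs pgrep on the node over SSH via ssh_runner, whereas this sketch runs it locally, and the 3-second interval and 2-minute deadline are illustrative assumptions rather than minikube's configured values.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative overall timeout
		for time.Now().Before(deadline) {
			// Same process check as the "sudo pgrep -xnf" lines above,
			// minus sudo and the SSH hop.
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second) // roughly the spacing seen in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}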
	I1222 22:59:18.870402  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:18.881333  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:18.899917  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.899940  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:18.899987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:18.918652  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.918666  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:18.918711  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:18.936854  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.936871  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:18.936930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:18.956082  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.956099  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:18.956148  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:18.974672  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.974690  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:18.974747  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:18.993264  158374 logs.go:282] 0 containers: []
	W1222 22:59:18.993281  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:18.993330  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:19.013308  158374 logs.go:282] 0 containers: []
	W1222 22:59:19.013325  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:19.013335  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:19.013346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:19.063311  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:19.063330  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:19.078990  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:19.079012  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:19.135746  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:19.127970   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.128563   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130146   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.130556   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:19.132525   21956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:19.135757  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:19.135778  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:19.165331  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:19.165348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:21.694471  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:21.705412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:21.724588  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.724617  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:21.724663  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:21.744659  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.744677  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:21.744732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:21.762841  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.762858  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:21.762913  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:21.782008  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.782023  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:21.782064  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:21.801013  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.801031  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:21.801077  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:21.817861  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.817879  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:21.817936  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:21.836076  158374 logs.go:282] 0 containers: []
	W1222 22:59:21.836093  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:21.836104  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:21.836115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:21.884827  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:21.884849  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:21.900053  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:21.900069  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:21.955238  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:21.947967   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.948481   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950007   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.950444   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:21.952014   22101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:21.955248  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:21.955258  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:21.984138  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:21.984157  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.515104  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:24.526883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:24.546166  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.546180  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:24.546228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:24.565305  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.565319  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:24.565361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:24.584559  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.584572  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:24.584631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:24.604650  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.604664  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:24.604712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:24.623346  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.623362  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:24.623412  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:24.642324  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.642343  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:24.642406  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:24.661990  158374 logs.go:282] 0 containers: []
	W1222 22:59:24.662004  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:24.662013  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:24.662024  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:24.677840  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:24.677855  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:24.734271  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:24.726969   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.727498   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729051   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.729473   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:24.731041   22258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:24.734289  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:24.734304  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:24.764562  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:24.764580  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:24.793099  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:24.793115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.340497  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:27.351904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:27.372400  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.372419  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:27.372472  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:27.392295  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.392312  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:27.392363  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:27.411771  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.411784  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:27.411828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:27.430497  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.430512  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:27.430558  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:27.449983  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.449999  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:27.450044  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:27.469696  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.469714  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:27.469771  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:27.488685  158374 logs.go:282] 0 containers: []
	W1222 22:59:27.488702  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:27.488715  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:27.488730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:27.517546  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:27.517564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:27.564530  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:27.564554  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:27.579944  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:27.579963  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:27.636369  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:27.629189   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.629734   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631292   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.631750   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:27.633229   22431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:27.636383  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:27.636394  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.168117  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:30.179633  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:30.199078  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.199094  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:30.199144  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:30.218504  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.218517  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:30.218559  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:30.237792  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.237810  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:30.237858  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:30.257058  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.257073  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:30.257118  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:30.277405  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.277422  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:30.277475  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:30.297453  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.297467  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:30.297515  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:30.316894  158374 logs.go:282] 0 containers: []
	W1222 22:59:30.316915  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:30.316924  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:30.316936  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 22:59:30.346684  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:30.346705  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:30.376362  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:30.376378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:30.422918  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:30.422940  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:30.438917  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:30.438935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:30.494621  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 22:59:30.487113   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.487790   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489338   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.489779   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:30.491378   22590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
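The gathering pass above is a fixed set of shell commands that minikube runs over SSH on the node; the same probes can be replayed by hand (for example from a `minikube ssh` session on the affected profile). A condensed sketch using the exact commands from the log, with quoting added for interactive use:

    # Is any apiserver process or container present at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    docker ps -a --filter=name=k8s_kube-apiserver '--format={{.ID}}'
    # Unit logs and kernel warnings the gatherer collects:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # The step that keeps failing with "connection refused":
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig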
	[... the same diagnostic cycle repeats nine more times between 22:59:32 and 22:59:55, roughly every 2.5-3 seconds: pgrep for kube-apiserver, then docker ps checks reporting "0 containers" for k8s_kube-apiserver, k8s_etcd, k8s_coredns, k8s_kube-scheduler, k8s_kube-proxy, k8s_kube-controller-manager, and k8s_kindnet, then gathering kubelet, dmesg, Docker, and container-status logs; every "describe nodes" attempt exits with status 1 and the same "connection refused" stderr against localhost:8441 ...]
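The cadence visible in the timestamps is a poll-until-deadline loop around that pgrep check. A minimal shell equivalent follows; the 2.5 s interval and the deadline handling are assumptions for illustration, not minikube's actual implementation. The final pass captured below repeats the same checks.

    # Illustrative retry loop; interval and deadline are assumed values.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo 'apiserver never came up' >&2
        exit 1
      fi
      sleep 2.5
    done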
	I1222 22:59:58.428962  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 22:59:58.440024  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 22:59:58.459773  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.459787  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 22:59:58.459828  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 22:59:58.478843  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.478863  158374 logs.go:284] No container was found matching "etcd"
	I1222 22:59:58.478920  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 22:59:58.498503  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.498518  158374 logs.go:284] No container was found matching "coredns"
	I1222 22:59:58.498563  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 22:59:58.518032  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.518052  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 22:59:58.518110  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 22:59:58.537315  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.537330  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 22:59:58.537388  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 22:59:58.556299  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.556319  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 22:59:58.556368  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 22:59:58.575345  158374 logs.go:282] 0 containers: []
	W1222 22:59:58.575359  158374 logs.go:284] No container was found matching "kindnet"
	I1222 22:59:58.575369  158374 logs.go:123] Gathering logs for container status ...
	I1222 22:59:58.575378  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 22:59:58.603490  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 22:59:58.603508  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 22:59:58.651589  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 22:59:58.651620  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 22:59:58.667341  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 22:59:58.667358  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 22:59:58.723840  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 22:59:58.716532   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.717054   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.718649   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.719098   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 22:59:58.720749   24120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 22:59:58.723855  158374 logs.go:123] Gathering logs for Docker ...
	I1222 22:59:58.723865  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.257052  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:01.268153  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:01.287939  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.287954  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:01.288001  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:01.306844  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.306857  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:01.306904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:01.326511  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.326530  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:01.326579  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:01.345734  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.345748  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:01.345793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:01.364619  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.364634  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:01.364682  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:01.383578  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.383605  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:01.383654  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:01.401753  158374 logs.go:282] 0 containers: []
	W1222 23:00:01.401770  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:01.401781  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:01.401795  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:01.457583  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:01.450373   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.450906   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.452946   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:01.454493   24259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:01.457611  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:01.457625  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:01.486870  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:01.486891  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:01.514587  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:01.514619  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:01.561028  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:01.561052  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:04.078615  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:04.089843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:04.109432  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.109450  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:04.109498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:04.128585  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.128630  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:04.128680  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:04.147830  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.147846  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:04.147901  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:04.166672  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.166686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:04.166730  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:04.185500  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.185523  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:04.185574  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:04.204345  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.204360  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:04.204404  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:04.222488  158374 logs.go:282] 0 containers: []
	W1222 23:00:04.222503  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:04.222513  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:04.222523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:04.252225  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:04.252244  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:04.280489  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:04.280507  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:04.329635  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:04.329657  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:04.345631  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:04.345650  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:04.400851  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:04.393656   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.394230   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.395887   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.396281   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:04.397838   24445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:06.901498  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:06.913084  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:06.932724  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.932739  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:06.932793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:06.951127  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.951146  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:06.951187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:06.969488  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.969501  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:06.969543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:06.987763  158374 logs.go:282] 0 containers: []
	W1222 23:00:06.987780  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:06.987824  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:07.005884  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.005900  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:07.005951  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:07.026370  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.026397  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:07.026449  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:07.047472  158374 logs.go:282] 0 containers: []
	W1222 23:00:07.047486  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:07.047496  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:07.047505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:07.092662  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:07.092679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:07.107657  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:07.107672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:07.162182  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:07.155104   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.155706   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157237   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.157737   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:07.159269   24590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:07.162193  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:07.162203  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:07.190466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:07.190482  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
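Note: this probe-and-gather cycle repeats roughly every 2.5 to 3 seconds (compare the timestamps of successive pgrep probes) for as long as the apiserver stays unreachable. Since no control-plane container ever appears, the kubelet journal that minikube tails above is the most likely place to show why the static pods are not being created. A short sketch for narrowing that journal by hand, assuming journalctl is available inside the node (the grep pattern is illustrative, not part of minikube's command):

	# Same source minikube gathers above, filtered down to likely failure lines.
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20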
	I1222 23:00:09.719767  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:09.730961  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:09.750004  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.750021  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:09.750061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:09.768191  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.768203  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:09.768240  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:09.785655  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.785668  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:09.785715  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:09.803931  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.803946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:09.803987  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:09.823040  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.823058  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:09.823105  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:09.841359  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.841373  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:09.841413  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:09.859786  158374 logs.go:282] 0 containers: []
	W1222 23:00:09.859799  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:09.859812  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:09.859824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:09.905428  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:09.905445  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:09.920496  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:09.920511  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:09.974948  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:09.968124   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.968667   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970182   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.970583   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:09.972119   24736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:09.974969  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:09.974982  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:10.003466  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:10.003485  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.535644  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:12.546867  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:12.565761  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.565778  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:12.565825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:12.584431  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.584446  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:12.584504  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:12.602950  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.602966  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:12.603009  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:12.621210  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.621224  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:12.621268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:12.639377  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.639393  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:12.639444  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:12.657924  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.657941  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:12.657984  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:12.676311  158374 logs.go:282] 0 containers: []
	W1222 23:00:12.676326  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:12.676336  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:12.676346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:12.703500  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:12.703515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:12.750933  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:12.750951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:12.766856  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:12.766870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:12.822138  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:12.815289   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.815808   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817339   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.817801   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:12.819283   24904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:12.822170  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:12.822269  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.355685  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:15.366722  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:15.385319  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.385334  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:15.385401  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:15.402653  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.402666  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:15.402712  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:15.420695  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.420709  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:15.420757  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:15.438422  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.438438  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:15.438488  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:15.457961  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.457978  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:15.458023  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:15.477016  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.477031  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:15.477075  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:15.495320  158374 logs.go:282] 0 containers: []
	W1222 23:00:15.495335  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:15.495346  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:15.495363  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:15.542697  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:15.542716  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:15.557986  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:15.558002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:15.613071  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:15.605742   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.606273   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.607863   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.608322   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:15.609898   25048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:15.613082  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:15.613093  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:15.643893  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:15.643912  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:18.176478  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:18.187435  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:18.206820  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.206836  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:18.206885  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:18.225162  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.225179  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:18.225242  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:18.244089  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.244106  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:18.244149  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:18.263582  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.263618  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:18.263678  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:18.285421  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.285439  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:18.285483  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:18.304575  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.304616  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:18.304679  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:18.322814  158374 logs.go:282] 0 containers: []
	W1222 23:00:18.322831  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:18.322842  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:18.322853  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:18.367678  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:18.367695  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:18.384038  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:18.384060  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:18.439158  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:18.432401   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.432947   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434455   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.434840   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:18.436266   25210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:18.439172  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:18.439186  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:18.468274  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:18.468290  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:20.996786  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:21.007676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:21.026577  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.026589  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:21.026662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:21.045179  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.045195  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:21.045237  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:21.064216  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.064230  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:21.064278  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:21.082929  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.082946  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:21.082991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:21.101298  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.101314  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:21.101372  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:21.119708  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.119719  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:21.119759  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:21.137828  158374 logs.go:282] 0 containers: []
	W1222 23:00:21.137841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:21.137849  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:21.137859  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:21.167198  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:21.167214  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:21.194956  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:21.194974  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:21.243666  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:21.243687  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:21.259092  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:21.259108  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:21.316128  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:21.309163   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.309711   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311266   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.311721   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:21.313183   25383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:23.817830  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:23.829010  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:23.847819  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.847833  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:23.847883  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:23.866626  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.866640  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:23.866685  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:23.884038  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.884053  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:23.884099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:23.903021  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.903037  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:23.903091  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:23.921758  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.921771  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:23.921817  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:23.940118  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.940135  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:23.940176  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:23.958805  158374 logs.go:282] 0 containers: []
	W1222 23:00:23.958817  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:23.958826  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:23.958836  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:24.006524  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:24.006542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:24.021579  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:24.021602  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:24.077965  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:24.070852   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.071400   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.072995   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.073395   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:24.074970   25512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:24.077976  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:24.077986  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:24.107448  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:24.107464  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
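Note: the container-status command above is a shell fallback chain. The backquoted substitution, which crictl || echo crictl, expands to either the full path of crictl or, when which fails, the literal word crictl supplied by echo; sudo then runs that with ps -a, and if the whole invocation fails (crictl absent or broken) the trailing || sudo docker ps -a falls back to Docker directly. The same idiom spelled out on its own:

	# Prefer crictl when it is installed; otherwise fall back to plain Docker.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a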
	I1222 23:00:26.635419  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:26.646546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:26.665787  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.665805  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:26.665856  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:26.683869  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.683885  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:26.683930  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:26.702549  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.702565  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:26.702628  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:26.720884  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.720901  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:26.720947  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:26.739437  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.739453  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:26.739498  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:26.757871  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.757885  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:26.757927  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:26.775863  158374 logs.go:282] 0 containers: []
	W1222 23:00:26.775882  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:26.775893  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:26.775902  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:26.821886  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:26.821903  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:26.837204  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:26.837220  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:26.891970  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:26.884764   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.885231   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.886895   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.887281   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:26.888844   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:26.891981  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:26.891991  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:26.922932  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:26.922949  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
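Each probe cycle then scans for one container per control-plane component by name prefix, exactly as the docker ps -a --filter=name=k8s_... --format={{.ID}} lines above show. A standalone sketch of that scan (component list copied from the log; sudo and the SSH hop are omitted for brevity):

// find_containers.go - sketch of the per-component container scan in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name carries the k8s_<component> prefix, one ID per output line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}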
	I1222 23:00:29.452400  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:29.463551  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:29.482265  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.482278  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:29.482326  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:29.501689  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.501707  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:29.501762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:29.522730  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.522747  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:29.522799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:29.542657  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.542671  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:29.542720  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:29.560883  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.560897  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:29.560938  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:29.579281  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.579297  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:29.579340  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:29.597740  158374 logs.go:282] 0 containers: []
	W1222 23:00:29.597755  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:29.597766  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:29.597777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:29.627231  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:29.627248  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:29.655168  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:29.655183  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:29.703330  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:29.703348  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:29.718800  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:29.718821  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:29.773515  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:29.766735   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.767220   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.768799   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.769197   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:29.770715   25841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:32.274411  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:32.285356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:32.304409  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.304423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:32.304465  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:32.324167  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.324183  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:32.324228  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:32.342878  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.342893  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:32.342950  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:32.362212  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.362226  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:32.362268  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:32.381154  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.381171  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:32.381229  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:32.400512  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.400533  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:32.400587  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:32.419083  158374 logs.go:282] 0 containers: []
	W1222 23:00:32.419097  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:32.419112  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:32.419121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:32.466805  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:32.466824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:32.482931  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:32.482947  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:32.543407  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:32.536205   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.536750   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538320   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.538796   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:32.540418   25971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:32.543419  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:32.543436  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:32.572975  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:32.572990  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:35.102503  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:35.113391  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:35.132097  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.132109  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:35.132151  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:35.150359  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.150378  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:35.150441  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:35.169070  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.169088  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:35.169141  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:35.187626  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.187641  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:35.187686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:35.205837  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.205854  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:35.205895  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:35.224198  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.224213  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:35.224255  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:35.241750  158374 logs.go:282] 0 containers: []
	W1222 23:00:35.241765  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:35.241774  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:35.241783  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:35.286130  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:35.286145  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:35.301096  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:35.301111  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:35.356973  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:35.349818   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.350427   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.351993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.352503   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:35.353993   26124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:35.356986  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:35.356997  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:35.385504  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:35.385523  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
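Every describe-nodes attempt fails the same way: kubectl's discovery request to https://localhost:8441/api is refused because nothing is listening on the apiserver port (8441 here) yet. A quick way to confirm that diagnosis out of band is a plain TCP dial, sketched below; "connection refused" means the port is closed, whereas a timeout would point at a firewall or routing problem instead:

// check_port.go - sketch of the failure mode behind the refused connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Refused: the port is closed, i.e. the apiserver is not running.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}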
	I1222 23:00:37.914534  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:37.925443  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:37.943928  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.943942  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:37.943990  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:37.961408  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.961424  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:37.961481  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:37.979364  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.979380  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:37.979437  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:37.997737  158374 logs.go:282] 0 containers: []
	W1222 23:00:37.997751  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:37.997796  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:38.016341  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.016358  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:38.016425  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:38.035203  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.035221  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:38.035270  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:38.053655  158374 logs.go:282] 0 containers: []
	W1222 23:00:38.053672  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:38.053684  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:38.053699  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:38.100003  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:38.100022  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:38.116642  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:38.116661  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:38.170897  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:38.164046   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.164628   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166176   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.166562   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:38.168042   26282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:38.170907  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:38.170921  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:38.200254  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:38.200273  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:40.729556  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:40.740911  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:40.762071  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.762085  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:40.762131  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:40.782191  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.782207  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:40.782259  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:40.802303  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.802318  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:40.802365  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:40.821101  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.821115  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:40.821159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:40.839813  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.839830  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:40.839880  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:40.859473  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.859490  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:40.859546  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:40.877060  158374 logs.go:282] 0 containers: []
	W1222 23:00:40.877076  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:40.877088  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:40.877101  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:40.922835  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:40.922852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:40.938346  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:40.938361  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:40.993518  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:40.986693   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.987159   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.988687   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.989134   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:40.990683   26438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:40.993531  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:40.993542  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:41.023093  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:41.023109  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:43.552889  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:43.564108  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:43.582897  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.582914  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:43.582969  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:43.601736  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.601750  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:43.601793  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:43.620069  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.620083  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:43.620126  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:43.638251  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.638269  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:43.638335  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:43.656816  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.656829  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:43.656878  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:43.675289  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.675302  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:43.675354  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:43.693789  158374 logs.go:282] 0 containers: []
	W1222 23:00:43.693805  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:43.693817  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:43.693828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:43.749062  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:43.742036   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.742643   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.744184   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.744673   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:43.746198   26577 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:43.749078  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:43.749091  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:43.779321  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:43.779346  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:43.808611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:43.808632  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:43.853216  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:43.853235  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
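The "Gathering logs for kubelet/Docker" steps are plain journald tails: the last 400 lines of each systemd unit, as the journalctl commands above show. A self-contained sketch of that gathering step (assumes a systemd host with journalctl and sudo available; not minikube's implementation):

// gather_logs.go - sketch of the unit-log tailing shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

// tailUnits returns the last 400 journal lines for the given systemd units,
// mirroring `journalctl -u <unit> ... -n 400` from the log.
func tailUnits(units ...string) (string, error) {
	args := []string{"journalctl"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	args = append(args, "-n", "400")
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	for _, units := range [][]string{{"kubelet"}, {"docker", "cri-docker"}} {
		out, err := tailUnits(units...)
		if err != nil {
			fmt.Println("journalctl failed:", err)
			continue
		}
		fmt.Printf("=== %v ===\n%s\n", units, out)
	}
}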
	I1222 23:00:46.369073  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:46.380061  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:46.398781  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.398797  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:46.398851  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:46.416817  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.416834  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:46.416877  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:46.434863  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.434877  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:46.434923  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:46.453147  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.453164  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:46.453208  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:46.471210  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.471224  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:46.471272  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:46.489455  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.489468  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:46.489517  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:46.508022  158374 logs.go:282] 0 containers: []
	W1222 23:00:46.508039  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:46.508050  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:46.508061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:46.555488  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:46.555505  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:46.571399  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:46.571415  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:46.626809  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:46.620219   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.620751   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.622257   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.622660   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:46.624126   26740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:46.626822  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:46.626834  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:46.656631  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:46.656648  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:49.197674  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:49.208617  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:49.228203  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.228221  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:49.228267  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:49.246623  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.246638  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:49.246676  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:49.264794  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.264810  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:49.264861  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:49.283414  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.283431  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:49.283480  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:49.301735  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.301748  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:49.301787  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:49.320079  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.320092  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:49.320134  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:49.339280  158374 logs.go:282] 0 containers: []
	W1222 23:00:49.339296  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:49.339308  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:49.339324  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:49.354937  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:49.354953  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:49.409733  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:49.402775   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.403250   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.404843   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.405306   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:49.406844   26901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:49.409744  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:49.409753  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:49.438748  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:49.438765  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:49.466948  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:49.466964  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:52.015822  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:52.027799  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:52.047753  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.047770  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:52.047825  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:52.067486  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.067502  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:52.067557  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:52.086094  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.086110  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:52.086158  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:52.105497  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.105513  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:52.105560  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:52.123560  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.123575  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:52.123641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:52.141887  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.141905  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:52.141957  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:52.160464  158374 logs.go:282] 0 containers: []
	W1222 23:00:52.160480  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:52.160491  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:52.160500  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:52.207605  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:52.207623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:52.222700  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:52.222714  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:52.277875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:52.270341   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.270915   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.272985   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.273548   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:52.275027   27059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1222 23:00:52.277887  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:52.277899  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:52.307146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:52.307163  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
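The "container status" step uses a shell fallback, visible in the one-liner above: prefer crictl if it is installed, otherwise fall back to docker ps -a. The same fallback expressed directly in Go (sketch only; minikube does it in a single bash command as shown):

// container_status.go - sketch of the crictl-then-docker fallback.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// crictl missing or failing; fall back the way the shell one-liner does.
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("no container runtime CLI available:", err)
		return
	}
	fmt.Print(string(out))
}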
	I1222 23:00:54.836681  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:54.847631  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:54.866945  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.866961  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:54.867018  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:54.885478  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.885491  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:54.885540  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:54.904643  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.904657  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:54.904701  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:54.923539  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.923554  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:54.923615  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:54.941322  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.941338  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:54.941399  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:54.961766  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.961785  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:54.961839  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:54.981417  158374 logs.go:282] 0 containers: []
	W1222 23:00:54.981432  158374 logs.go:284] No container was found matching "kindnet"
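[note: the block above is minikube's per-component container probe: one docker ps name filter per control-plane piece, matching the k8s_<component> names given to kubelet-created containers under the docker runtime. All seven probes come back empty here, consistent with the apiserver never starting. The same sweep written as a loop (a sketch of the logged commands, not minikube code):]

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done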
	I1222 23:00:54.981442  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:54.981452  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:55.012286  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:55.012306  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:00:55.043682  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:55.043703  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:55.091444  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:55.091466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:55.107015  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:55.107039  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:55.162701  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:55.155754   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.156265   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.157880   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.158349   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.159815   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:55.155754   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.156265   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.157880   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.158349   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:55.159815   27231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
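[note: the kubectl PID climbs on each cycle (27059, 27231, then 27355 below), so every attempt is a fresh invocation rather than a hung client; minikube keeps polling until its start wait expires. A hypothetical way to watch for the apiserver process directly from the host:]

    minikube -p functional-384766 ssh -- "sudo pgrep -af kube-apiserver || echo 'no kube-apiserver process'"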
	I1222 23:00:57.664308  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:00:57.675489  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:00:57.692858  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.692875  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:00:57.692935  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:00:57.712522  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.712538  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:00:57.712607  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:00:57.732210  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.732226  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:00:57.732269  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:00:57.751532  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.751545  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:00:57.751602  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:00:57.772243  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.772257  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:00:57.772301  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:00:57.791227  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.791243  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:00:57.791304  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:00:57.810525  158374 logs.go:282] 0 containers: []
	W1222 23:00:57.810543  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:00:57.810552  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:00:57.810561  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:00:57.858495  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:00:57.858513  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:00:57.873762  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:00:57.873777  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:00:57.929650  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:00:57.922350   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.922945   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924490   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924968   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.926535   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:00:57.922350   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.922945   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924490   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.924968   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:00:57.926535   27355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:00:57.929662  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:00:57.929672  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:00:57.960293  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:00:57.960310  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.491408  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:00.502843  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:00.522074  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.522090  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:00.522138  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:00.540871  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.540888  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:00.540945  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:00.558913  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.558931  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:00.558975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:00.577980  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.577997  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:00.578050  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:00.597037  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.597056  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:00.597104  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:00.615867  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.615881  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:00.615924  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:00.634566  158374 logs.go:282] 0 containers: []
	W1222 23:01:00.634581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:00.634609  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:00.634623  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:00.663403  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:00.663425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:00.712341  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:00.712364  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:00.729099  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:00.729121  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:00.785283  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:00.778009   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.778541   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780082   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780546   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.782086   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:00.778009   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.778541   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780082   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.780546   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:00.782086   27527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:00.785294  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:00.785307  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:03.318166  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:03.329041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:03.347864  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.347878  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:03.347940  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:03.366921  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.366937  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:03.366991  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:03.384054  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.384070  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:03.384117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:03.403365  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.403380  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:03.403432  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:03.422487  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.422501  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:03.422556  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:03.440736  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.440751  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:03.440805  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:03.459891  158374 logs.go:282] 0 containers: []
	W1222 23:01:03.459906  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:03.459915  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:03.459926  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:03.508624  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:03.508646  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:03.525926  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:03.525946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:03.581788  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:03.574395   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.574955   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.576542   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.577046   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.578534   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:03.574395   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.574955   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.576542   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.577046   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:03.578534   27664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:03.581798  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:03.581809  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:03.610547  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:03.610564  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:06.140960  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:06.152054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:06.171277  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.171290  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:06.171346  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:06.190005  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.190024  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:06.190073  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:06.209033  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.209052  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:06.209119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:06.228351  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.228368  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:06.228424  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:06.248668  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.248682  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:06.248737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:06.269569  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.269587  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:06.269662  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:06.291826  158374 logs.go:282] 0 containers: []
	W1222 23:01:06.291841  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:06.291851  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:06.291861  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:06.337818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:06.337839  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:06.353395  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:06.353413  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:06.410648  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:06.403406   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.403979   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.405629   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.406063   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.407632   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:06.403406   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.403979   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.405629   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.406063   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:06.407632   27819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:06.410666  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:06.410681  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:06.440427  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:06.440447  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:08.971014  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:08.982172  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:09.001296  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.001315  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:09.001377  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:09.021010  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.021025  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:09.021065  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:09.039299  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.039315  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:09.039361  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:09.059055  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.059069  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:09.059119  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:09.079065  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.079080  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:09.079123  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:09.098129  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.098148  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:09.098194  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:09.116949  158374 logs.go:282] 0 containers: []
	W1222 23:01:09.116965  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:09.116974  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:09.116984  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:09.172761  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:09.165842   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.166366   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.167907   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.168380   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.169881   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:09.165842   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.166366   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.167907   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.168380   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:09.169881   27961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:09.172771  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:09.172782  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:09.203094  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:09.203115  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:09.231372  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:09.231389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:09.279710  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:09.279730  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:11.797972  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:11.809159  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:11.828406  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.828423  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:11.828474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:11.847173  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.847194  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:11.847248  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:11.866123  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.866141  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:11.866191  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:11.885374  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.885388  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:11.885428  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:11.904372  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.904386  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:11.904429  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:11.923420  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.923437  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:11.923496  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:11.942294  158374 logs.go:282] 0 containers: []
	W1222 23:01:11.942332  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:11.942344  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:11.942356  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:11.999449  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:11.992239   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.992816   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994388   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994771   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.996307   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:11.992239   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.992816   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994388   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.994771   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:11.996307   28121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:11.999465  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:11.999478  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:12.029498  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:12.029524  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:12.058708  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:12.058726  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:12.106818  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:12.106837  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:14.624265  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:14.635338  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:14.654466  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.654481  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:14.654523  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:14.674801  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.674817  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:14.674860  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:14.694258  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.694275  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:14.694322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:14.714916  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.714932  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:14.714980  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:14.734141  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.734155  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:14.734198  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:14.754093  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.754108  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:14.754162  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:14.773451  158374 logs.go:282] 0 containers: []
	W1222 23:01:14.773468  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:14.773481  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:14.773496  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:14.830750  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:14.823428   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.824043   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.825689   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.826129   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.827703   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:14.823428   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.824043   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.825689   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.826129   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:14.827703   28278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:14.830760  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:14.830770  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:14.859787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:14.859804  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:14.888611  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:14.888631  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:14.936097  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:14.936118  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.453191  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:17.464732  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:17.485010  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.485025  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:17.485072  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:17.505952  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.505969  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:17.506027  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:17.528761  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.528776  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:17.528821  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:17.549296  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.549312  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:17.549376  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:17.568100  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.568117  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:17.568167  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:17.587017  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.587034  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:17.587086  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:17.606020  158374 logs.go:282] 0 containers: []
	W1222 23:01:17.606036  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:17.606045  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:17.606061  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:17.621414  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:17.621430  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:17.677909  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:17.670771   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.671262   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.672895   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.673340   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.674869   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:17.670771   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.671262   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.672895   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.673340   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:17.674869   28438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:17.677925  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:17.677935  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:17.708117  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:17.708138  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:17.739554  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:17.739574  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:20.288948  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:20.299810  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:20.318929  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.318945  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:20.319006  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:20.337556  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.337573  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:20.337641  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:20.355705  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.355718  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:20.355760  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:20.373672  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.373686  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:20.373726  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:20.392616  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.392631  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:20.392674  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:20.411253  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.411270  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:20.411322  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:20.429537  158374 logs.go:282] 0 containers: []
	W1222 23:01:20.429552  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:20.429563  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:20.429575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:20.445080  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:20.445098  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:20.501506  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:20.494080   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.494697   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496376   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496836   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.498365   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:20.494080   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.494697   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496376   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.496836   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:20.498365   28582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
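Every "describe nodes" attempt in this section fails the same way: kubectl dials localhost:8441 (the custom --apiserver-port) and gets ECONNREFUSED, i.e. nothing is listening on the apiserver port inside the node. A minimal diagnostic sketch, run via `minikube ssh` (assumes curl and ss are present in the node image, which is not shown in the log):

    # Probe the apiserver port; both failing matches the refusals above.
    curl -ksS --max-time 5 https://localhost:8441/healthz \
      || echo "no apiserver answering on 8441"
    ss -ltn | grep ':8441' \
      || echo "nothing listening on 8441"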
	I1222 23:01:20.501520  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:20.501535  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:20.531907  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:20.531925  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:20.560530  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:20.560547  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
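Each retry cycle above is the same seven probes: one `docker ps` name filter per expected control-plane component. A self-contained sketch of that loop, using the exact filter and format strings from the log (the "k8s_" prefix is cri-dockerd's pod-container naming convention):

    # One docker probe per control-plane component, as the cycles repeat.
    for component in kube-apiserver etcd coredns kube-scheduler \
                     kube-proxy kube-controller-manager kindnet; do
      ids=$(docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}')
      [ -n "$ids" ] && echo "${component}: ${ids}" \
                    || echo "no container matching ${component}" >&2
    done

Here every probe returns zero IDs, so the control plane never came up at all rather than crashing after start.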
	I1222 23:01:23.108846  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:23.119974  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:23.138613  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.138631  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:23.138686  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:23.156921  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.156935  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:23.156975  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:23.175153  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.175166  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:23.175209  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:23.193263  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.193295  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:23.193356  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:23.212207  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.212221  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:23.212263  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:23.230929  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.230945  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:23.231005  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:23.249646  158374 logs.go:282] 0 containers: []
	W1222 23:01:23.249659  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:23.249669  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:23.249679  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:23.277729  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:23.277745  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:23.324063  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:23.324082  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:23.339406  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:23.339425  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:23.394875  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:23.387312   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.387859   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.389847   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.390576   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.392042   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:23.387312   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.387859   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.389847   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.390576   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:23.392042   28761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:23.394885  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:23.394895  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
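The four log sources gathered on every cycle can be pulled by hand on the node; these are the commands from the log verbatim, runnable over `minikube ssh`:

    sudo journalctl -u kubelet -n 400                                        # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel
    sudo journalctl -u docker -u cri-docker -n 400                           # runtime
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a           # containers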
	I1222 23:01:25.926075  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:25.937168  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:25.956016  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.956028  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:25.956074  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:25.974143  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.974159  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:25.974202  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:25.992434  158374 logs.go:282] 0 containers: []
	W1222 23:01:25.992449  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:25.992505  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:26.010399  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.010415  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:26.010474  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:26.029958  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.029974  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:26.030039  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:26.048962  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.048977  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:26.049032  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:26.067563  158374 logs.go:282] 0 containers: []
	W1222 23:01:26.067581  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:26.067605  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:26.067627  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:26.115800  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:26.115818  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:26.131143  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:26.131160  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:26.187038  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:26.179580   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.180759   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.181222   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.182757   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.183141   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:26.179580   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.180759   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.181222   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.182757   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:26.183141   28901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:26.187049  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:26.187059  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:26.217787  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:26.217807  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
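The timestamps show a `pgrep` for a live kube-apiserver process roughly every 2.5-3 seconds, with log gathering between attempts. A sketch of the wait loop those timestamps imply; the interval and deadline here are illustrative guesses, not values from the log:

    # Poll for a running kube-apiserver until a deadline (illustrative).
    deadline=$(( $(date +%s) + 240 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out" >&2; exit 1; }
      sleep 2.5
    done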
	I1222 23:01:28.746733  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:28.757902  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:28.777575  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.777608  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:28.777664  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:28.798343  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.798363  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:28.798420  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:28.816843  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.816859  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:28.816904  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:28.835441  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.835458  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:28.835507  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:28.855127  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.855139  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:28.855195  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:28.873368  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.873381  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:28.873422  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:28.890543  158374 logs.go:282] 0 containers: []
	W1222 23:01:28.890556  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:28.890565  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:28.890575  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:28.918533  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:28.918553  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:28.963368  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:28.963389  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:28.978933  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:28.978951  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:29.035020  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:29.027770   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.028286   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.029825   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.030294   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:29.031906   29072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:29.035033  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:29.035044  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.565580  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:31.576645  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:31.596094  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.596110  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:31.596173  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:31.615329  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.615345  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:31.615394  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:31.634047  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.634065  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:31.634117  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:31.653553  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.653567  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:31.653626  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:31.671338  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.671354  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:31.671411  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:31.689466  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.689482  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:31.689536  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:31.707731  158374 logs.go:282] 0 containers: []
	W1222 23:01:31.707744  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:31.707753  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:31.707761  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:31.759802  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:31.759828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:31.776809  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:31.776828  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:31.833983  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:31.827071   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.827553   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829051   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.829424   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:31.830959   29214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:31.833996  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:31.834010  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:31.862984  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:31.863002  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
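One thing worth ruling out when every dial is refused is a wrong-port kubeconfig. The server the node-local kubectl targets is recorded in the kubeconfig the log passes via --kubeconfig, so a quick check (path taken from the log) is:

    # Confirm the node-local kubeconfig points at the custom port 8441.
    sudo grep 'server:' /var/lib/minikube/kubeconfig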
	I1222 23:01:34.392435  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:34.403543  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:34.422799  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.422814  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:34.422857  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:34.442014  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.442029  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:34.442076  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:34.460995  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.461007  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:34.461056  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:34.479671  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.479688  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:34.479737  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:34.498097  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.498113  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:34.498169  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:34.515924  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.515939  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:34.515985  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:34.534433  158374 logs.go:282] 0 containers: []
	W1222 23:01:34.534447  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:34.534455  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:34.534466  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:34.589495  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:34.581716   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.582244   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.583869   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.584305   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:34.586383   29354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:34.589505  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:34.589515  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:34.619333  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:34.619352  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:34.647369  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:34.647385  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:34.692228  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:34.692249  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.207836  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:37.218829  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:37.237894  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.237908  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:37.237953  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:37.256000  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.256012  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:37.256054  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:37.275097  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.275112  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:37.275161  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:37.294044  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.294059  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:37.294099  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:37.312979  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.312995  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:37.313041  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:37.331662  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.331675  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:37.331718  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:37.350197  158374 logs.go:282] 0 containers: []
	W1222 23:01:37.350212  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:37.350223  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:37.350233  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:37.397328  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:37.397345  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:37.412835  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:37.412852  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:37.468475  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:37.461747   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.462295   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.463832   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.464240   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:37.465745   29517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:37.468487  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:37.468498  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:37.497915  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:37.497934  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.027516  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:40.039728  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:01:40.058705  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.058719  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:01:40.058762  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:01:40.076167  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.076184  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:01:40.076231  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:01:40.095983  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.095996  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:01:40.096037  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:01:40.114657  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.114670  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:01:40.114717  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:01:40.133015  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.133028  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:01:40.133070  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:01:40.152127  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.152140  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:01:40.152187  158374 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:01:40.169569  158374 logs.go:282] 0 containers: []
	W1222 23:01:40.169583  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:01:40.169622  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:01:40.169636  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:01:40.184978  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:01:40.184992  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:01:40.238860  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:01:40.231964   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.232541   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234087   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.234491   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:01:40.236049   29674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:01:40.238870  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:01:40.238879  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:01:40.268146  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:01:40.268165  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:01:40.295931  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:01:40.295946  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:01:42.843708  158374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:01:42.854816  158374 kubeadm.go:602] duration metric: took 4m1.499731906s to restartPrimaryControlPlane
	W1222 23:01:42.854901  158374 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 23:01:42.854978  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:01:43.257733  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:01:43.270528  158374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:01:43.278400  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
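Because the docker driver cannot satisfy kubeadm's host verification, minikube skips SystemVerification (among other checks) at init time. kubeadm exposes preflight as a standalone phase, so the verification result can be reproduced in isolation; a sketch assuming the same staged binary and config shown below:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
      kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification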
	I1222 23:01:43.278439  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:01:43.285901  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:01:43.285910  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:01:43.285947  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:01:43.293882  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:01:43.293919  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:01:43.300825  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:01:43.307941  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:01:43.307983  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:01:43.314784  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.321699  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:01:43.321728  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:01:43.328397  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:01:43.335272  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:01:43.335301  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
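The check-then-remove sequence above (one grep and one rm per kubeconfig) collapses to a short loop; the endpoint and paths are taken verbatim from the log. Here all four greps exit 2 because the files do not exist, so the rm calls are no-ops:

    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # Keep the file only if it already targets this cluster's endpoint.
      sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done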
	I1222 23:01:43.341949  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:01:43.376034  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:01:43.376102  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:01:43.445165  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:01:43.445236  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:01:43.445264  158374 kubeadm.go:319] OS: Linux
	I1222 23:01:43.445301  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:01:43.445350  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:01:43.445392  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:01:43.445455  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:01:43.445494  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:01:43.445551  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:01:43.445588  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:01:43.445673  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:01:43.445736  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
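All of the v1 controllers report enabled, which together with the cgroups warning further down suggests this host is still on cgroup v1. Two quick checks distinguish the hierarchies (the stat idiom is a common convention, not from the log):

    stat -fc %T /sys/fs/cgroup        # "cgroup2fs" => v2, "tmpfs" => v1
    cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null \
      || ls /sys/fs/cgroup            # v2 controller list, else v1 hierarchies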
	I1222 23:01:43.500219  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:01:43.500396  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:01:43.500507  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:01:43.513368  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:01:43.515533  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:01:43.515634  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:01:43.515681  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:01:43.515765  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:01:43.515820  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:01:43.515882  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:01:43.515924  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:01:43.515975  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:01:43.516024  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:01:43.516083  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:01:43.516151  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:01:43.516181  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:01:43.516264  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:01:43.648060  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:01:43.775539  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:01:43.806099  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:01:43.912171  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:01:44.004004  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:01:44.004296  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:01:44.006366  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:01:44.008239  158374 out.go:252]   - Booting up control plane ...
	I1222 23:01:44.008308  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:01:44.008365  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:01:44.009041  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:01:44.026974  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:01:44.027088  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:01:44.033700  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:01:44.033947  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:01:44.034017  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:01:44.136722  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:01:44.136861  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:05:44.137086  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000408574s
	I1222 23:05:44.137129  158374 kubeadm.go:319] 
	I1222 23:05:44.137190  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:05:44.137216  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:05:44.137303  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:05:44.137309  158374 kubeadm.go:319] 
	I1222 23:05:44.137438  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:05:44.137492  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:05:44.137528  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:05:44.137531  158374 kubeadm.go:319] 
	I1222 23:05:44.140373  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.140752  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.140849  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:05:44.141147  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:05:44.141156  158374 kubeadm.go:319] 
	I1222 23:05:44.141230  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
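When the wait-control-plane phase expires like this, the fastest triage is kubeadm's own suggestions plus the exact probe it was running, all quoted in the error above:

    curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo
    systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet -n 50 --no-pager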
	W1222 23:05:44.141360  158374 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000408574s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
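The kubeadm failure above ends with two suggested systemd commands; a minimal triage sketch for this profile (assuming shell access via 'minikube ssh', with the profile name taken from this run):

    minikube ssh -p functional-384766
    # inside the node: see why the kubelet unit keeps exiting
    sudo systemctl status kubelet
    # the last kubelet log lines carry the actual validation error
    sudo journalctl -xeu kubelet | tail -n 50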
	
	I1222 23:05:44.141451  158374 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:05:44.555201  158374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:05:44.567871  158374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:05:44.567915  158374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:05:44.575883  158374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:05:44.575897  158374 kubeadm.go:158] found existing configuration files:
	
	I1222 23:05:44.575941  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1222 23:05:44.583486  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:05:44.583527  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:05:44.590649  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1222 23:05:44.597769  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:05:44.597806  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:05:44.604798  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.611986  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:05:44.612034  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:05:44.619193  158374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1222 23:05:44.626515  158374 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:05:44.626555  158374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
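The grep-and-remove sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. A condensed sketch of the same loop, using the endpoint from this run:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done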
	I1222 23:05:44.633629  158374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:05:44.735033  158374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:05:44.735554  158374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:05:44.792296  158374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:09:45.398895  158374 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:09:45.398970  158374 kubeadm.go:319] 
	I1222 23:09:45.399090  158374 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:09:45.401586  158374 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:09:45.401671  158374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:09:45.401745  158374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:09:45.401791  158374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:09:45.401820  158374 kubeadm.go:319] OS: Linux
	I1222 23:09:45.401885  158374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:09:45.401955  158374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:09:45.402023  158374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:09:45.402088  158374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:09:45.402152  158374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:09:45.402201  158374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:09:45.402235  158374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:09:45.402274  158374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:09:45.402309  158374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:09:45.402367  158374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:09:45.402449  158374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:09:45.402536  158374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:09:45.402585  158374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:09:45.404239  158374 out.go:252]   - Generating certificates and keys ...
	I1222 23:09:45.404310  158374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:09:45.404360  158374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:09:45.404421  158374 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:09:45.404472  158374 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:09:45.404530  158374 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:09:45.404569  158374 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:09:45.404650  158374 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:09:45.404705  158374 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:09:45.404761  158374 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:09:45.404827  158374 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:09:45.404867  158374 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:09:45.404910  158374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:09:45.404948  158374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:09:45.404989  158374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:09:45.405029  158374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:09:45.405075  158374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:09:45.405115  158374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:09:45.405181  158374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:09:45.405240  158374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:09:45.406503  158374 out.go:252]   - Booting up control plane ...
	I1222 23:09:45.406585  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:09:45.406677  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:09:45.406738  158374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:09:45.406832  158374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:09:45.406905  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:09:45.406993  158374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:09:45.407062  158374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:09:45.407092  158374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:09:45.407211  158374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:09:45.407300  158374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:09:45.407348  158374 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000888774s
	I1222 23:09:45.407350  158374 kubeadm.go:319] 
	I1222 23:09:45.407409  158374 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:09:45.407435  158374 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:09:45.407521  158374 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:09:45.407524  158374 kubeadm.go:319] 
	I1222 23:09:45.407628  158374 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:09:45.407652  158374 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:09:45.407675  158374 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:09:45.407697  158374 kubeadm.go:319] 
	I1222 23:09:45.407753  158374 kubeadm.go:403] duration metric: took 12m4.079935698s to StartCluster
	I1222 23:09:45.407873  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:09:45.407938  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:09:45.445003  158374 cri.go:96] found id: ""
	I1222 23:09:45.445021  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.445027  158374 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:09:45.445038  158374 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:09:45.445084  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:09:45.470772  158374 cri.go:96] found id: ""
	I1222 23:09:45.470788  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.470794  158374 logs.go:284] No container was found matching "etcd"
	I1222 23:09:45.470799  158374 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:09:45.470845  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:09:45.495903  158374 cri.go:96] found id: ""
	I1222 23:09:45.495920  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.495927  158374 logs.go:284] No container was found matching "coredns"
	I1222 23:09:45.495933  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:09:45.495983  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:09:45.523926  158374 cri.go:96] found id: ""
	I1222 23:09:45.523943  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.523952  158374 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:09:45.523960  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:09:45.524021  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:09:45.551137  158374 cri.go:96] found id: ""
	I1222 23:09:45.551153  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.551164  158374 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:09:45.551171  158374 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:09:45.551226  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:09:45.576565  158374 cri.go:96] found id: ""
	I1222 23:09:45.576583  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.576611  158374 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:09:45.576621  158374 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:09:45.576676  158374 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:09:45.601969  158374 cri.go:96] found id: ""
	I1222 23:09:45.601983  158374 logs.go:282] 0 containers: []
	W1222 23:09:45.601991  158374 logs.go:284] No container was found matching "kindnet"
	I1222 23:09:45.602003  158374 logs.go:123] Gathering logs for kubelet ...
	I1222 23:09:45.602018  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:09:45.650169  158374 logs.go:123] Gathering logs for dmesg ...
	I1222 23:09:45.650193  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:09:45.665853  158374 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:09:45.665870  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:09:45.722796  158374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:09:45.715772   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.716296   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.717847   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.718263   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:45.719823   37272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:09:45.722812  158374 logs.go:123] Gathering logs for Docker ...
	I1222 23:09:45.722824  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:09:45.751752  158374 logs.go:123] Gathering logs for container status ...
	I1222 23:09:45.751775  158374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 23:09:45.780185  158374 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:09:45.780232  158374 out.go:285] * 
	W1222 23:09:45.780327  158374 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.780349  158374 out.go:285] * 
	W1222 23:09:45.780644  158374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:09:45.784145  158374 out.go:203] 
	W1222 23:09:45.785152  158374 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000888774s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:09:45.785198  158374 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:09:45.785226  158374 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:09:45.786470  158374 out.go:203] 
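The suggestion above can be tried directly against this profile; a sketch using only the flags visible in this log (other flags from the original invocation omitted, and whether the systemd cgroup driver clears the cgroup v1 validation failure on this host is unverified):

    minikube start -p functional-384766 --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd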
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.415844375Z" level=info msg="Loading containers: done."
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:09:52.580273   38193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:52.580892   38193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:52.582405   38193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:52.582901   38193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:09:52.584407   38193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
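Every "connection refused" above is the kube-apiserver simply not listening: with no control-plane containers running (the container status section above is empty), nothing serves port 8441. A quick check from the host (a sketch; 'ss' from iproute2 is assumed present on this Debian image):

    minikube ssh -p functional-384766 -- sudo ss -ltn 'sport = :8441'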
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:09:52 up  2:52,  0 user,  load average: 0.19, 0.13, 0.33
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:09:49 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:49 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 22 23:09:49 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:49 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:50 functional-384766 kubelet[37775]: E1222 23:09:50.041254   37775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:50 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:50 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:50 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 327.
	Dec 22 23:09:50 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:50 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:50 functional-384766 kubelet[37909]: E1222 23:09:50.790497   37909 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:50 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:50 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:51 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 328.
	Dec 22 23:09:51 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:51 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:51 functional-384766 kubelet[37983]: E1222 23:09:51.540711   37983 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:51 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:51 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:09:52 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 329.
	Dec 22 23:09:52 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:52 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:09:52 functional-384766 kubelet[38084]: E1222 23:09:52.308510   38084 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:09:52 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:09:52 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
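The kubelet restart loop in the captured logs fails validation before the kubelet ever binds its healthz port, which is why the earlier 4m0s wait expired. The SystemVerification warning names the escape hatch: set the kubelet configuration option 'FailCgroupV1' to 'false'. A sketch of what that could look like as an extra KubeletConfiguration document appended to the kubeadm config used in this run (the field is assumed to map to 'failCgroupV1' in kubelet.config.k8s.io/v1beta1, and minikube may regenerate this file on the next start):

    cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF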
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (407.123732ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (2.24s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-384766 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-384766 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (53.100383ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-384766 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
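All three label assertions above fail for the same reason: with the apiserver refusing connections on 192.168.49.2:8441, kubectl returns an empty List, and the go-template then calls (index .items 0) on a zero-length slice. A minimal, illustrative Go sketch (not the test suite's code) of a guarded template that tolerates the empty list the test actually received:

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	func main() {
		// Same shape as the raw data shown above: an empty List.
		raw := []byte(`{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}`)
		var doc map[string]any
		if err := json.Unmarshal(raw, &doc); err != nil {
			panic(err)
		}
		// {{with .items}} skips its body entirely when the slice is empty,
		// so (index . 0) is never evaluated against a missing element.
		tmpl := template.Must(template.New("labels").Parse(
			`{{with .items}}{{range $k, $v := (index . 0).metadata.labels}}{{$k}} {{end}}{{end}}`))
		if err := tmpl.Execute(os.Stdout, doc); err != nil {
			panic(err)
		}
	}

Against the empty List this prints nothing and exits cleanly, instead of aborting with the "reflect: slice index out of range" error seen above.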
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-384766
helpers_test.go:244: (dbg) docker inspect functional-384766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	        "Created": "2025-12-22T22:43:03.818900502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T22:43:03.847527913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/hosts",
	        "LogPath": "/var/lib/docker/containers/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c/e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c-json.log",
	        "Name": "/functional-384766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-384766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-384766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e126b999cc063ee0a68492e79491a8674b8fc6008cc067cb30902412e51fc42c",
	                "LowerDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e3d10c0ae87018d46767d6a2bb62611a8b9a288f6938e75c60f3cd57119d4bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-384766",
	                "Source": "/var/lib/docker/volumes/functional-384766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-384766",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-384766",
	                "name.minikube.sigs.k8s.io": "functional-384766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d6f65d275ad1e1cfaea153f23b0c094464e089c27de9a12387045fa2c863e00e",
	            "SandboxKey": "/var/run/docker/netns/d6f65d275ad1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-384766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b177601c4f3a252e4feb1553da3a4110e40d5b9ed2bd5de6789f2bc9f8f5c2b",
	                    "EndpointID": "2c787f98c5d836612c102f7592dc2eccfef09327c2a6cadf1319fd6559b5eca8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:90:04:78:9b:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-384766",
	                        "e126b999cc06"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-384766 -n functional-384766: exit status 2 (298.606716ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
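The apparent contradiction above (stdout "Running", exit status 2) is expected: the --format go-template renders only the Host field, while the exit code carries the degraded apiserver state. A rough sketch of how such a --format template evaluates, with an illustrative Status struct standing in for minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in for the struct minikube renders;
	// field names mirror the status output shown elsewhere in this report.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Running"; the stopped apiserver surfaces only in the exit code.
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}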
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001:/mount-9p --alsologtostderr -v=1              │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh       │ functional-384766 ssh -- ls -la /mount-9p                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh       │ functional-384766 ssh cat /mount-9p/test-1766444997145080367                                                                                       │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ ssh       │ functional-384766 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-384766 ssh sudo umount -f /mount-9p                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │ 22 Dec 25 23:09 UTC │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun683538616/001:/mount-9p --alsologtostderr -v=1 --port 39301 │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:09 UTC │                     │
	│ start     │ -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1      │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ start     │ -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1      │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ start     │ -p functional-384766 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ dashboard │ --url --port 0 -p functional-384766 --alsologtostderr -v=1                                                                                         │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh -- ls -la /mount-9p                                                                                                          │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ ssh       │ functional-384766 ssh sudo umount -f /mount-9p                                                                                                     │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount3 --alsologtostderr -v=1               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount1 --alsologtostderr -v=1               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount1                                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ mount     │ -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount2 --alsologtostderr -v=1               │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount1                                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ ssh       │ functional-384766 ssh sudo systemctl is-active crio                                                                                                │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	│ ssh       │ functional-384766 ssh findmnt -T /mount2                                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ license   │                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ ssh       │ functional-384766 ssh findmnt -T /mount3                                                                                                           │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │ 22 Dec 25 23:10 UTC │
	│ mount     │ -p functional-384766 --kill=true                                                                                                                   │ functional-384766 │ jenkins │ v1.37.0 │ 22 Dec 25 23:10 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:10:00
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:10:00.471084  189065 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:10:00.471344  189065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.471353  189065 out.go:374] Setting ErrFile to fd 2...
	I1222 23:10:00.471358  189065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.471534  189065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:10:00.471985  189065 out.go:368] Setting JSON to false
	I1222 23:10:00.472909  189065 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10340,"bootTime":1766434660,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:10:00.472959  189065 start.go:143] virtualization: kvm guest
	I1222 23:10:00.474587  189065 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:10:00.475694  189065 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:10:00.475703  189065 notify.go:221] Checking for updates...
	I1222 23:10:00.477739  189065 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:10:00.478849  189065 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:10:00.479809  189065 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:10:00.480818  189065 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:10:00.482103  189065 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:10:00.483637  189065 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:10:00.484147  189065 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:10:00.508451  189065 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:10:00.508633  189065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.567029  189065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.557282865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.567130  189065 docker.go:319] overlay module found
	I1222 23:10:00.568775  189065 out.go:179] * Using the docker driver based on existing profile
	I1222 23:10:00.569819  189065 start.go:309] selected driver: docker
	I1222 23:10:00.569832  189065 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.569903  189065 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:10:00.570021  189065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.622749  189065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.61321899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.623382  189065 cni.go:84] Creating CNI manager for ""
	I1222 23:10:00.623470  189065 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:10:00.623528  189065 start.go:353] cluster config:
	{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.625075  189065 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.415844375Z" level=info msg="Loading containers: done."
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427335656Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427378701Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.427416508Z" level=info msg="Initializing buildkit"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.445627752Z" level=info msg="Completed buildkit initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450569156Z" level=info msg="Daemon has completed initialization"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450666768Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450705582Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 22:57:39 functional-384766 dockerd[17979]: time="2025-12-22T22:57:39.450724731Z" level=info msg="API listen on [::]:2376"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 22 22:57:39 functional-384766 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 22 22:57:39 functional-384766 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 22:57:39 functional-384766 cri-dockerd[18283]: time="2025-12-22T22:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 22:57:39 functional-384766 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:10:04.382691   39832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:04.383186   39832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:04.384748   39832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:04.385234   39832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1222 23:10:04.386754   39832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff da 9e 7f a3 27 cb 08 06
	[  +0.239045] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e eb f7 fd 0a 48 08 06
	[  +0.170967] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 5a dc 65 fc cc 08 06
	[Dec22 22:37] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 cb ee 90 55 2b 08 06
	[  +0.000450] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[  +0.000658] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 41 3c 76 23 2b 08 06
	[  +1.709294] IPv4: martian source 10.244.0.31 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be b6 30 85 5f 4e 08 06
	[  +0.532867] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 43 50 0c dd 15 08 06
	[Dec22 22:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 b7 49 09 f9 e0 08 06
	[  +0.006417] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e e5 c5 4f 67 2b 08 06
	[Dec22 22:40] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 22 2e 10 70 70 25 08 06
	[Dec22 22:41] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff ee d7 ae 32 ba c5 08 06
	[Dec22 22:42] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 95 cb 2f 8e 91 08 06
	
	
	==> kernel <==
	 23:10:04 up  2:52,  0 user,  load average: 0.86, 0.28, 0.38
	Linux functional-384766 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 342.
	Dec 22 23:10:01 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:01 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:01 functional-384766 kubelet[39436]: E1222 23:10:01.981193   39436 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:10:01 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:10:02 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 343.
	Dec 22 23:10:02 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:02 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:02 functional-384766 kubelet[39633]: E1222 23:10:02.786001   39633 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:10:02 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:10:02 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:10:03 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 344.
	Dec 22 23:10:03 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:03 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:03 functional-384766 kubelet[39691]: E1222 23:10:03.546642   39691 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:10:03 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:10:03 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:10:04 functional-384766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 345.
	Dec 22 23:10:04 functional-384766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:04 functional-384766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:10:04 functional-384766 kubelet[39782]: E1222 23:10:04.283792   39782 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:10:04 functional-384766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:10:04 functional-384766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-384766 -n functional-384766: exit status 2 (296.237476ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-384766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (1.22s)
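The kubelet excerpt above points at the underlying cause of the refused connections throughout this run: the v1.35.0-rc.1 kubelet fails its own configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and systemd restart-loops it (counter 342 through 345), so the apiserver never comes back up. The docker info lines earlier in the log carry the matching cgroup v1 deprecation warning. A quick, illustrative probe (not the kubelet's actual validation code) for which hierarchy a host is running:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a cgroup v2 (unified) host, the top of the cgroup filesystem
		// exposes cgroup.controllers; on a v1 or hybrid host it does not.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 or hybrid - consistent with the validation failure above")
		}
	}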

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash (0.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-384766 docker-env) && out/minikube-linux-amd64 status -p functional-384766"
functional_test.go:514: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-384766 docker-env) && out/minikube-linux-amd64 status -p functional-384766": exit status 2 (748.158176ms)

                                                
                                                
-- stdout --
	functional-384766
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

                                                
                                                
-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 2
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/bash (0.75s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1222 23:09:53.486507  184741 out.go:360] Setting OutFile to fd 1 ...
I1222 23:09:53.486717  184741 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:09:53.486796  184741 out.go:374] Setting ErrFile to fd 2...
I1222 23:09:53.486816  184741 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:09:53.487147  184741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:09:53.487566  184741 mustload.go:66] Loading cluster: functional-384766
I1222 23:09:53.488209  184741 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:09:53.488837  184741 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:09:53.512339  184741 host.go:66] Checking if "functional-384766" exists ...
I1222 23:09:53.512759  184741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 23:09:53.592947  184741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:09:53.581333236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1222 23:09:53.593105  184741 api_server.go:166] Checking apiserver status ...
I1222 23:09:53.593159  184741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1222 23:09:53.593201  184741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:09:53.613906  184741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
W1222 23:09:53.721046  184741 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1222 23:09:53.723034  184741 out.go:179] * The control-plane node functional-384766 apiserver is not running: (state=Stopped)
I1222 23:09:53.724240  184741 out.go:179]   To start a cluster, run: "minikube start -p functional-384766"

                                                
                                                
stdout: * The control-plane node functional-384766 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-384766"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 184740: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-384766 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-384766 apply -f testdata/testsvc.yaml: exit status 1 (59.123661ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-384766 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (108.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.109.193.124": Temporary Error: Get "http://10.109.193.124": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-384766 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-384766 get svc nginx-svc: exit status 1 (53.436777ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-384766 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (108.74s)
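The access check retries until its overall deadline lapses; with nothing answering at the tunnel-assigned ClusterIP, every request stalls until the client timeout fires, producing the Client.Timeout error above. A bounded-poll sketch in the same spirit (the URL comes from the log; the retry cadence and timeouts here are illustrative, not the test's values):

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second} // per-request cap
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		for {
			resp, err := client.Get("http://10.109.193.124")
			if err == nil {
				resp.Body.Close()
				fmt.Println("service answered:", resp.Status)
				return
			}
			fmt.Println("retrying after error:", err)
			select {
			case <-ctx.Done():
				fmt.Println("giving up:", ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}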

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-384766 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1456: (dbg) Non-zero exit: kubectl --context functional-384766 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server: exit status 1 (52.655988ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1458: failed to create hello-node deployment with this command "kubectl --context functional-384766 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 service list
functional_test.go:1474: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 service list: exit status 103 (279.905384ms)

-- stdout --
	* The control-plane node functional-384766 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384766"

-- /stdout --
functional_test.go:1476: failed to do service list. args "out/minikube-linux-amd64 -p functional-384766 service list" : exit status 103
functional_test.go:1479: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-384766 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384766\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 service list -o json
functional_test.go:1504: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 service list -o json: exit status 103 (269.095887ms)

-- stdout --
	* The control-plane node functional-384766 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384766"

-- /stdout --
functional_test.go:1506: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-384766 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 service --namespace=default --https --url hello-node
functional_test.go:1524: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 service --namespace=default --https --url hello-node: exit status 103 (302.500356ms)

-- stdout --
	* The control-plane node functional-384766 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384766"

-- /stdout --
functional_test.go:1526: failed to get service url. args "out/minikube-linux-amd64 -p functional-384766 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 service hello-node --url --format={{.IP}}
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 service hello-node --url --format={{.IP}}: exit status 103 (258.569558ms)

-- stdout --
	* The control-plane node functional-384766 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384766"

-- /stdout --
functional_test.go:1557: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-384766 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1563: "* The control-plane node functional-384766 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384766\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 service hello-node --url
functional_test.go:1574: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 service hello-node --url: exit status 103 (269.778594ms)

-- stdout --
	* The control-plane node functional-384766 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-384766"

-- /stdout --
functional_test.go:1576: failed to get service url. args: "out/minikube-linux-amd64 -p functional-384766 service hello-node --url": exit status 103
functional_test.go:1580: found endpoint for hello-node: * The control-plane node functional-384766 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-384766"
functional_test.go:1584: failed to parse "* The control-plane node functional-384766 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384766\"": parse "* The control-plane node functional-384766 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-384766\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766444997145080367" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766444997145080367" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766444997145080367" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001/test-1766444997145080367
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.275708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1222 23:09:57.449743   75803 retry.go:84] will retry after 700ms: exit status 1 (duplicate log for 28m24.1s)
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 22 23:09 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 22 23:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 22 23:09 test-1766444997145080367
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh cat /mount-9p/test-1766444997145080367
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-384766 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) Non-zero exit: kubectl --context functional-384766 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (49.403899ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:151: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-384766 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:81: "TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:82: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:82: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (285.032161ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=37521)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 22 23:09 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 22 23:09 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 22 23:09 test-1766444997145080367
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:84: debugging command "out/minikube-linux-amd64 -p functional-384766 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:95: (dbg) [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:37521
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:95: (dbg) [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001:/mount-9p --alsologtostderr -v=1] stderr:
I1222 23:09:57.205636  187042 out.go:360] Setting OutFile to fd 1 ...
I1222 23:09:57.205968  187042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:09:57.205978  187042 out.go:374] Setting ErrFile to fd 2...
I1222 23:09:57.205983  187042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:09:57.206183  187042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:09:57.206423  187042 mustload.go:66] Loading cluster: functional-384766
I1222 23:09:57.206779  187042 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:09:57.208845  187042 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:09:57.227707  187042 host.go:66] Checking if "functional-384766" exists ...
I1222 23:09:57.228004  187042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 23:09:57.293098  187042 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:09:57.281942526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1222 23:09:57.293279  187042 cli_runner.go:164] Run: docker network inspect functional-384766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 23:09:57.319881  187042 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001 into VM as /mount-9p ...
I1222 23:09:57.321311  187042 out.go:179]   - Mount type:   9p
I1222 23:09:57.322497  187042 out.go:179]   - User ID:      docker
I1222 23:09:57.323680  187042 out.go:179]   - Group ID:     docker
I1222 23:09:57.325014  187042 out.go:179]   - Version:      9p2000.L
I1222 23:09:57.326051  187042 out.go:179]   - Message Size: 262144
I1222 23:09:57.327046  187042 out.go:179]   - Options:      map[]
I1222 23:09:57.328100  187042 out.go:179]   - Bind Address: 192.168.49.1:37521
I1222 23:09:57.329151  187042 out.go:179] * Userspace file server: 
I1222 23:09:57.329437  187042 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1222 23:09:57.329539  187042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:09:57.351178  187042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
I1222 23:09:57.455648  187042 mount.go:180] unmount for /mount-9p ran successfully
I1222 23:09:57.455681  187042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1222 23:09:57.465568  187042 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37521,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1222 23:09:57.477394  187042 main.go:127] stdlog: ufs.go:141 connected
I1222 23:09:57.477537  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tversion tag 65535 msize 262144 version '9P2000.L'
I1222 23:09:57.477583  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rversion tag 65535 msize 262144 version '9P2000'
I1222 23:09:57.477798  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1222 23:09:57.477881  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rattach tag 0 aqid (20fa2d2 48539a17 'd')
I1222 23:09:57.478158  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 0
I1222 23:09:57.478297  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2d2 48539a17 'd') m d775 at 0 mt 1766444997 l 4096 t 0 d 0 ext )
I1222 23:09:57.479690  187042 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/.mount-process: {Name:mk79995f9ba51d8429a24cb058885a78af4a55bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 23:09:57.479895  187042 mount.go:105] mount successful: ""
I1222 23:09:57.481849  187042 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun105456838/001 to /mount-9p
I1222 23:09:57.482878  187042 out.go:203] 
I1222 23:09:57.483999  187042 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1222 23:09:58.733758  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 0
I1222 23:09:58.733898  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2d2 48539a17 'd') m d775 at 0 mt 1766444997 l 4096 t 0 d 0 ext )
I1222 23:09:58.734270  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 1 
I1222 23:09:58.734366  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 
I1222 23:09:58.734506  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Topen tag 0 fid 1 mode 0
I1222 23:09:58.734571  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Ropen tag 0 qid (20fa2d2 48539a17 'd') iounit 0
I1222 23:09:58.734723  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 0
I1222 23:09:58.734883  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2d2 48539a17 'd') m d775 at 0 mt 1766444997 l 4096 t 0 d 0 ext )
I1222 23:09:58.735898  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 0 count 262120
I1222 23:09:58.736147  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 258
I1222 23:09:58.736314  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 258 count 261862
I1222 23:09:58.736356  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 0
I1222 23:09:58.736495  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 258 count 262120
I1222 23:09:58.736531  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 0
I1222 23:09:58.736677  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 2 0:'test-1766444997145080367' 
I1222 23:09:58.736727  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 (20fa2d5 48539a17 '') 
I1222 23:09:58.736825  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:58.736926  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('test-1766444997145080367' 'jenkins' 'balintp' '' q (20fa2d5 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:58.737026  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:58.737124  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('test-1766444997145080367' 'jenkins' 'balintp' '' q (20fa2d5 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:58.737256  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 2
I1222 23:09:58.737288  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:58.737386  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1222 23:09:58.737444  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 (20fa2d4 48539a17 '') 
I1222 23:09:58.737533  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:58.737664  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2d4 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:58.737756  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:58.737846  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2d4 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:58.737948  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 2
I1222 23:09:58.737974  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:58.738093  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1222 23:09:58.738134  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 (20fa2d3 48539a17 '') 
I1222 23:09:58.738220  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:58.738290  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2d3 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:58.738390  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:58.738451  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2d3 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:58.738542  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 2
I1222 23:09:58.738560  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:58.738772  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 258 count 262120
I1222 23:09:58.738810  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 0
I1222 23:09:58.738928  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 1
I1222 23:09:58.738957  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.049845  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 1 0:'test-1766444997145080367' 
I1222 23:09:59.049921  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 (20fa2d5 48539a17 '') 
I1222 23:09:59.050093  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 1
I1222 23:09:59.050225  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('test-1766444997145080367' 'jenkins' 'balintp' '' q (20fa2d5 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.050342  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 1 newfid 2 
I1222 23:09:59.050384  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 
I1222 23:09:59.050483  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Topen tag 0 fid 2 mode 0
I1222 23:09:59.050551  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Ropen tag 0 qid (20fa2d5 48539a17 '') iounit 0
I1222 23:09:59.050660  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 1
I1222 23:09:59.050759  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('test-1766444997145080367' 'jenkins' 'balintp' '' q (20fa2d5 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.050971  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 2 offset 0 count 24
I1222 23:09:59.051039  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 24
I1222 23:09:59.051175  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 2
I1222 23:09:59.051215  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.051361  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 1
I1222 23:09:59.051397  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.389628  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 0
I1222 23:09:59.389787  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2d2 48539a17 'd') m d775 at 0 mt 1766444997 l 4096 t 0 d 0 ext )
I1222 23:09:59.390121  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 1 
I1222 23:09:59.390169  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 
I1222 23:09:59.390305  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Topen tag 0 fid 1 mode 0
I1222 23:09:59.390372  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Ropen tag 0 qid (20fa2d2 48539a17 'd') iounit 0
I1222 23:09:59.390505  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 0
I1222 23:09:59.390626  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa2d2 48539a17 'd') m d775 at 0 mt 1766444997 l 4096 t 0 d 0 ext )
I1222 23:09:59.390868  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 0 count 262120
I1222 23:09:59.391026  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 258
I1222 23:09:59.391171  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 258 count 261862
I1222 23:09:59.391213  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 0
I1222 23:09:59.391350  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 258 count 262120
I1222 23:09:59.391384  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 0
I1222 23:09:59.391493  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 2 0:'test-1766444997145080367' 
I1222 23:09:59.391545  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 (20fa2d5 48539a17 '') 
I1222 23:09:59.391659  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:59.391740  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('test-1766444997145080367' 'jenkins' 'balintp' '' q (20fa2d5 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.391849  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:59.391951  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('test-1766444997145080367' 'jenkins' 'balintp' '' q (20fa2d5 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.392056  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 2
I1222 23:09:59.392086  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.392192  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1222 23:09:59.392246  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 (20fa2d4 48539a17 '') 
I1222 23:09:59.392341  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:59.392420  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2d4 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.392528  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:59.392626  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa2d4 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.392734  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 2
I1222 23:09:59.392762  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.392875  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1222 23:09:59.392913  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rwalk tag 0 (20fa2d3 48539a17 '') 
I1222 23:09:59.393014  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:59.393091  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2d3 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.393193  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tstat tag 0 fid 2
I1222 23:09:59.393291  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa2d3 48539a17 '') m 644 at 0 mt 1766444997 l 24 t 0 d 0 ext )
I1222 23:09:59.393399  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 2
I1222 23:09:59.393426  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.393545  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tread tag 0 fid 1 offset 258 count 262120
I1222 23:09:59.393577  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rread tag 0 count 0
I1222 23:09:59.393718  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 1
I1222 23:09:59.393757  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.394765  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1222 23:09:59.394815  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rerror tag 0 ename 'file not found' ecode 0
I1222 23:09:59.693398  187042 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:49052 Tclunk tag 0 fid 0
I1222 23:09:59.693458  187042 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:49052 Rclunk tag 0
I1222 23:09:59.694084  187042 main.go:127] stdlog: ufs.go:147 disconnected
I1222 23:09:59.735894  187042 out.go:179] * Unmounting /mount-9p ...
I1222 23:09:59.737074  187042 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1222 23:09:59.747706  187042 mount.go:180] unmount for /mount-9p ran successfully
I1222 23:09:59.747821  187042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/.mount-process: {Name:mk79995f9ba51d8429a24cb058885a78af4a55bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 23:09:59.751179  187042 out.go:203] 
W1222 23:09:59.753151  187042 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1222 23:09:59.754213  187042 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (2.69s)
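Note that the 9p mount itself succeeded; only the kubectl-driven pod steps failed once the apiserver became unreachable. For reference, the guest-side mount command issued by the ssh_runner (copied from the trace above and reflowed for readability; the bind port 37521 is assigned per run):
	sudo mount -t 9p \
	  -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37521,trans=tcp,version=9p2000.L \
	  192.168.49.1 /mount-9p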

TestKubernetesUpgrade (782.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-767823 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1222 23:44:19.073495   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-767823 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.900719492s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-767823 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-767823 --alsologtostderr: (10.984107305s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-767823 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-767823 status --format={{.Host}}: exit status 7 (76.228811ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
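Exit status 7 from minikube status is the expected result against a stopped cluster, hence the "(may be ok)" above. Assuming minikube's status exit-code bitmask (roughly: 1 = host stopped, 2 = control plane stopped, 4 = Kubernetes stopped, so everything stopped sums to 7; worth verifying against the minikube source), the same check by hand looks like:
	out/minikube-linux-amd64 -p kubernetes-upgrade-767823 status --format='{{.Host}}'
	echo "exit: $?"   # expect 7 while the container is stopped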
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-767823 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1222 23:44:53.441111   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-767823 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 109 (12m20.904368299s)

-- stdout --
	* [kubernetes-upgrade-767823] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-767823" primary control-plane node in "kubernetes-upgrade-767823" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	
	

-- /stdout --
** stderr ** 
	I1222 23:44:36.891137  479667 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:44:36.891426  479667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:44:36.891443  479667 out.go:374] Setting ErrFile to fd 2...
	I1222 23:44:36.891447  479667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:44:36.891656  479667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:44:36.892089  479667 out.go:368] Setting JSON to false
	I1222 23:44:36.893216  479667 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12417,"bootTime":1766434660,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:44:36.893284  479667 start.go:143] virtualization: kvm guest
	I1222 23:44:36.895411  479667 out.go:179] * [kubernetes-upgrade-767823] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:44:36.896621  479667 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:44:36.896626  479667 notify.go:221] Checking for updates...
	I1222 23:44:36.899371  479667 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:44:36.900532  479667 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:44:36.901485  479667 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:44:36.902520  479667 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:44:36.903420  479667 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:44:36.904680  479667 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1222 23:44:36.905149  479667 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:44:36.930535  479667 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:44:36.930714  479667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:44:36.990152  479667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:44:36.980036501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:44:36.990263  479667 docker.go:319] overlay module found
	I1222 23:44:36.992076  479667 out.go:179] * Using the docker driver based on existing profile
	I1222 23:44:36.993233  479667 start.go:309] selected driver: docker
	I1222 23:44:36.993251  479667 start.go:928] validating driver "docker" against &{Name:kubernetes-upgrade-767823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-767823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:44:36.993360  479667 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:44:36.994261  479667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:44:37.048814  479667 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:44:37.038710572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:44:37.049190  479667 cni.go:84] Creating CNI manager for ""
	I1222 23:44:37.049261  479667 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:44:37.049300  479667 start.go:353] cluster config:
	{Name:kubernetes-upgrade-767823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-767823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:44:37.051173  479667 out.go:179] * Starting "kubernetes-upgrade-767823" primary control-plane node in "kubernetes-upgrade-767823" cluster
	I1222 23:44:37.052185  479667 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:44:37.053279  479667 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:44:37.054302  479667 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:44:37.054337  479667 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 23:44:37.054365  479667 cache.go:65] Caching tarball of preloaded images
	I1222 23:44:37.054406  479667 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:44:37.054488  479667 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 23:44:37.054513  479667 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 23:44:37.054630  479667 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/config.json ...
	I1222 23:44:37.075057  479667 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:44:37.075083  479667 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:44:37.075103  479667 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:44:37.075140  479667 start.go:360] acquireMachinesLock for kubernetes-upgrade-767823: {Name:mk6e6f9dc770a4299afcd5b5a8e46efa7315541d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:44:37.075249  479667 start.go:364] duration metric: took 66.029µs to acquireMachinesLock for "kubernetes-upgrade-767823"
	I1222 23:44:37.075287  479667 start.go:96] Skipping create...Using existing machine configuration
	I1222 23:44:37.075298  479667 fix.go:54] fixHost starting: 
	I1222 23:44:37.075608  479667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-767823 --format={{.State.Status}}
	I1222 23:44:37.095574  479667 fix.go:112] recreateIfNeeded on kubernetes-upgrade-767823: state=Stopped err=<nil>
	W1222 23:44:37.095633  479667 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 23:44:37.097014  479667 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-767823" ...
	I1222 23:44:37.097086  479667 cli_runner.go:164] Run: docker start kubernetes-upgrade-767823
	I1222 23:44:37.346833  479667 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-767823 --format={{.State.Status}}
	I1222 23:44:37.365283  479667 kic.go:430] container "kubernetes-upgrade-767823" state is running.
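With the container back up, the inspect calls that follow resolve which host port Docker mapped to the guest's 22/tcp, so libmachine can dial SSH at 127.0.0.1:&lt;port&gt;. The Go template in those calls works standalone too; a minimal sketch, assuming the same container name:

    # Look up the host port Docker published for 22/tcp inside the container.
    docker container inspect kubernetes-upgrade-767823 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # Prints e.g. 33063, matching the 127.0.0.1 33063 endpoint in the SSH dials below.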
	I1222 23:44:37.365758  479667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-767823
	I1222 23:44:37.386581  479667 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/config.json ...
	I1222 23:44:37.386853  479667 machine.go:94] provisionDockerMachine start ...
	I1222 23:44:37.386927  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:37.405899  479667 main.go:144] libmachine: Using SSH client type: native
	I1222 23:44:37.406238  479667 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1222 23:44:37.406261  479667 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:44:37.406907  479667 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41564->127.0.0.1:33063: read: connection reset by peer
	I1222 23:44:40.550625  479667 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-767823
	
	I1222 23:44:40.550658  479667 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-767823"
	I1222 23:44:40.550716  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:40.569411  479667 main.go:144] libmachine: Using SSH client type: native
	I1222 23:44:40.569682  479667 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1222 23:44:40.569699  479667 main.go:144] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-767823 && echo "kubernetes-upgrade-767823" | sudo tee /etc/hostname
	I1222 23:44:40.723246  479667 main.go:144] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-767823
	
	I1222 23:44:40.723350  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:40.748923  479667 main.go:144] libmachine: Using SSH client type: native
	I1222 23:44:40.749153  479667 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1222 23:44:40.749173  479667 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-767823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-767823/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-767823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:44:40.891823  479667 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:44:40.891851  479667 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:44:40.891889  479667 ubuntu.go:190] setting up certificates
	I1222 23:44:40.891903  479667 provision.go:84] configureAuth start
	I1222 23:44:40.891953  479667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-767823
	I1222 23:44:40.909569  479667 provision.go:143] copyHostCerts
	I1222 23:44:40.909655  479667 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:44:40.909679  479667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:44:40.909762  479667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:44:40.909922  479667 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:44:40.909938  479667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:44:40.909984  479667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:44:40.910096  479667 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:44:40.910109  479667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:44:40.910151  479667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:44:40.910242  479667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-767823 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-767823 localhost minikube]
	I1222 23:44:40.953808  479667 provision.go:177] copyRemoteCerts
	I1222 23:44:40.953882  479667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:44:40.953948  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:40.975144  479667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/kubernetes-upgrade-767823/id_rsa Username:docker}
	I1222 23:44:41.077433  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:44:41.097304  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1222 23:44:41.114582  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:44:41.132751  479667 provision.go:87] duration metric: took 240.832596ms to configureAuth
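The server cert generated above embeds the SANs listed in the san=[...] field (127.0.0.1, 192.168.76.2, the hostname, localhost, minikube). A quick way to confirm they actually landed in the cert, as a sketch with standard openssl and the path from the log:

    # Show the Subject Alternative Name extension of the generated server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'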
	I1222 23:44:41.132780  479667 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:44:41.132944  479667 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:44:41.132999  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:41.154383  479667 main.go:144] libmachine: Using SSH client type: native
	I1222 23:44:41.154692  479667 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1222 23:44:41.154711  479667 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:44:41.306078  479667 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:44:41.306103  479667 ubuntu.go:71] root file system type: overlay
	I1222 23:44:41.306245  479667 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:44:41.306315  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:41.330285  479667 main.go:144] libmachine: Using SSH client type: native
	I1222 23:44:41.330574  479667 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1222 23:44:41.330711  479667 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:44:41.488723  479667 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:44:41.488840  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:41.510264  479667 main.go:144] libmachine: Using SSH client type: native
	I1222 23:44:41.510521  479667 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1222 23:44:41.510546  479667 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:44:41.668705  479667 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:44:41.668732  479667 machine.go:97] duration metric: took 4.281860689s to provisionDockerMachine
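The diff-guarded command above is an idempotent-update idiom: because diff exits 0 when the files are identical, the || block (install the new unit, daemon-reload, restart) runs only when the rendered docker.service.new actually differs from the installed unit, so an unchanged config costs nothing. The general shape, as a sketch with FILE and SERVICE as placeholders:

    # Only install the new unit and restart the service if it differs.
    sudo diff -u "$FILE" "$FILE.new" || {
      sudo mv "$FILE.new" "$FILE"
      sudo systemctl daemon-reload && sudo systemctl restart "$SERVICE"
    }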
	I1222 23:44:41.668748  479667 start.go:293] postStartSetup for "kubernetes-upgrade-767823" (driver="docker")
	I1222 23:44:41.668764  479667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:44:41.668850  479667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:44:41.668900  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:41.689880  479667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/kubernetes-upgrade-767823/id_rsa Username:docker}
	I1222 23:44:41.795800  479667 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:44:41.799494  479667 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:44:41.799519  479667 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:44:41.799530  479667 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:44:41.799643  479667 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:44:41.799757  479667 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:44:41.799876  479667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:44:41.807727  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:44:41.825946  479667 start.go:296] duration metric: took 157.178362ms for postStartSetup
	I1222 23:44:41.826063  479667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:44:41.826116  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:41.845179  479667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/kubernetes-upgrade-767823/id_rsa Username:docker}
	I1222 23:44:41.946106  479667 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:44:41.950977  479667 fix.go:56] duration metric: took 4.875672428s for fixHost
	I1222 23:44:41.951007  479667 start.go:83] releasing machines lock for "kubernetes-upgrade-767823", held for 4.87574127s
	I1222 23:44:41.951092  479667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-767823
	I1222 23:44:41.969815  479667 ssh_runner.go:195] Run: cat /version.json
	I1222 23:44:41.969869  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:41.969892  479667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:44:41.969980  479667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-767823
	I1222 23:44:41.990965  479667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/kubernetes-upgrade-767823/id_rsa Username:docker}
	I1222 23:44:41.991147  479667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/kubernetes-upgrade-767823/id_rsa Username:docker}
	I1222 23:44:42.090411  479667 ssh_runner.go:195] Run: systemctl --version
	I1222 23:44:42.155561  479667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:44:42.160384  479667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:44:42.160464  479667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:44:42.168788  479667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 23:44:42.168818  479667 start.go:496] detecting cgroup driver to use...
	I1222 23:44:42.168851  479667 detect.go:187] detected "cgroupfs" cgroup driver on host os
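minikube needs the container runtime's cgroup driver to match the host's cgroup manager. Two quick host-side probes, assuming a shell on the node (the second is the same docker query the log itself runs further down):

    # cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1.
    stat -fc %T /sys/fs/cgroup
    # What dockerd is currently using (cgroupfs or systemd).
    docker info --format '{{.CgroupDriver}}'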
	I1222 23:44:42.168968  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:44:42.183397  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:44:42.192809  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:44:42.202231  479667 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:44:42.202288  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:44:42.213115  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:44:42.222025  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:44:42.231672  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:44:42.240966  479667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:44:42.249460  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:44:42.258122  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:44:42.268393  479667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
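The sed passes above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the cgroupfs driver, normalize the runc runtime to io.containerd.runc.v2, and re-enable unprivileged ports under the CRI plugin. To see what containerd will actually apply after edits like these, a sketch:

    # Dump the merged effective config and check the knobs the seds touched.
    containerd config dump | grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports'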
	I1222 23:44:42.280108  479667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:44:42.289359  479667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:44:42.298553  479667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:44:42.407658  479667 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 23:44:42.483297  479667 start.go:496] detecting cgroup driver to use...
	I1222 23:44:42.483349  479667 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:44:42.483414  479667 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:44:42.498220  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:44:42.510403  479667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:44:42.526986  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:44:42.539801  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:44:42.552746  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:44:42.566694  479667 ssh_runner.go:195] Run: which cri-dockerd
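crictl reads its default endpoint from /etc/crictl.yaml, so after the rewrite just above, CRI calls go to cri-dockerd instead of containerd. A sketch of both styles (implicit config vs explicit flag):

    # Uses runtime-endpoint from /etc/crictl.yaml (now cri-dockerd).
    sudo crictl ps
    # Equivalent, with the endpoint spelled out.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps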
	I1222 23:44:42.570565  479667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:44:42.578183  479667 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:44:42.590798  479667 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:44:42.684650  479667 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:44:42.773587  479667 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:44:42.773729  479667 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:44:42.786589  479667 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:44:42.798693  479667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:44:42.880923  479667 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:44:43.695370  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:44:43.707908  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:44:43.719720  479667 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 23:44:43.733359  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:44:43.746558  479667 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:44:43.834382  479667 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:44:43.916973  479667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:44:44.002246  479667 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:44:44.027734  479667 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:44:44.040398  479667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:44:44.124954  479667 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:44:44.205497  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:44:44.218538  479667 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:44:44.218689  479667 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 23:44:44.222632  479667 start.go:564] Will wait 60s for crictl version
	I1222 23:44:44.222759  479667 ssh_runner.go:195] Run: which crictl
	I1222 23:44:44.226269  479667 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:44:44.251807  479667 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 23:44:44.251866  479667 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:44:44.278455  479667 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:44:44.306718  479667 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 23:44:44.306802  479667 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-767823 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
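The network inspect above assembles a JSON blob out of Go-template pieces (name, driver, IPAM subnet and gateway, MTU, container IPs). The IPAM portion on its own is often all you need; a minimal sketch against the same network name:

    # Print just the subnet and gateway of the minikube-managed network.
    docker network inspect kubernetes-upgrade-767823 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'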
	I1222 23:44:44.326482  479667 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 23:44:44.330783  479667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
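The brace-group idiom above makes the hosts entry idempotent: strip any existing host.minikube.internal line, append the fresh one, write to a temp file, then copy it back with sudo (a plain redirect into /etc/hosts would run without root). The same pattern for an arbitrary entry, as a sketch with a hypothetical hostname and IP:

    # Replace-or-add a hosts entry without duplicating it on reruns.
    { grep -v $'\tmyhost.internal$' /etc/hosts; echo $'10.0.0.5\tmyhost.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts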
	I1222 23:44:44.342076  479667 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-767823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-767823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:44:44.342184  479667 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:44:44.342234  479667 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:44:44.365436  479667 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:44:44.365460  479667 docker.go:700] registry.k8s.io/kube-apiserver:v1.35.0-rc.1 wasn't preloaded
	I1222 23:44:44.365522  479667 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1222 23:44:44.373882  479667 ssh_runner.go:195] Run: which lz4
	I1222 23:44:44.378753  479667 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1222 23:44:44.382811  479667 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1222 23:44:44.382835  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284645196 bytes)
	I1222 23:44:45.077983  479667 docker.go:658] duration metric: took 699.259964ms to copy over tarball
	I1222 23:44:45.078070  479667 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1222 23:44:46.503174  479667 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.425049515s)
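The -I lz4 flag tells tar to pipe the archive through the external lz4 binary (which is why the log checks `which lz4` first), and -C /var unpacks the preloaded /var/lib/docker layers in place. Creating a compatible tarball follows the same shape; a sketch, with paths as assumptions:

    # Pack a docker image store with lz4 compression, preserving capability xattrs.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 \
      -C /var -cf preloaded.tar.lz4 lib/docker
    # Unpack the same way minikube does above.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf preloaded.tar.lz4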
	I1222 23:44:46.503208  479667 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1222 23:44:46.554825  479667 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1222 23:44:46.562928  479667 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2652 bytes)
	I1222 23:44:46.575258  479667 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:44:46.586916  479667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:44:46.666954  479667 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:44:48.990747  479667 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323744023s)
	I1222 23:44:48.990854  479667 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:44:49.018378  479667 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:44:49.018407  479667 cache_images.go:86] Images are preloaded, skipping loading
	I1222 23:44:49.018419  479667 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 docker true true} ...
	I1222 23:44:49.018566  479667 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-767823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-767823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 23:44:49.018654  479667 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 23:44:49.079406  479667 cni.go:84] Creating CNI manager for ""
	I1222 23:44:49.079440  479667 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:44:49.079465  479667 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 23:44:49.079490  479667 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-767823 NodeName:kubernetes-upgrade-767823 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:44:49.079699  479667 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-767823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
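The manifest just rendered stacks four documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4) plus KubeletConfiguration and KubeProxyConfiguration. Once it is staged on the node (the scp to kubeadm.yaml.new below), kubeadm can sanity-check it before any phase runs; a hedged sketch:

    # Validate the multi-document kubeadm config against the current schema.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new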
	
	I1222 23:44:49.079787  479667 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 23:44:49.089135  479667 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 23:44:49.089198  479667 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:44:49.098215  479667 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1222 23:44:49.110372  479667 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 23:44:49.122004  479667 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1222 23:44:49.134106  479667 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:44:49.137727  479667 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:44:49.150222  479667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:44:49.243231  479667 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:44:49.275870  479667 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823 for IP: 192.168.76.2
	I1222 23:44:49.275896  479667 certs.go:195] generating shared ca certs ...
	I1222 23:44:49.275920  479667 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:44:49.276096  479667 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:44:49.276160  479667 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:44:49.276172  479667 certs.go:257] generating profile certs ...
	I1222 23:44:49.276296  479667 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/client.key
	I1222 23:44:49.276371  479667 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/apiserver.key.80441291
	I1222 23:44:49.276436  479667 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/proxy-client.key
	I1222 23:44:49.276579  479667 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:44:49.276644  479667 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:44:49.276659  479667 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:44:49.276698  479667 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:44:49.276751  479667 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:44:49.276791  479667 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:44:49.276856  479667 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:44:49.277672  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:44:49.298836  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:44:49.316788  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:44:49.333956  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:44:49.351396  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1222 23:44:49.376010  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 23:44:49.394815  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:44:49.411988  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 23:44:49.428269  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:44:49.445785  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:44:49.462672  479667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:44:49.479154  479667 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:44:49.491164  479667 ssh_runner.go:195] Run: openssl version
	I1222 23:44:49.497091  479667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:44:49.504379  479667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:44:49.511301  479667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:44:49.514777  479667 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:44:49.514822  479667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:44:49.549476  479667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:44:49.557754  479667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:44:49.565175  479667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:44:49.573032  479667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:44:49.577024  479667 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:44:49.577076  479667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:44:49.611121  479667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 23:44:49.618522  479667 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:44:49.625568  479667 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:44:49.632876  479667 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:44:49.636610  479667 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:44:49.636661  479667 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:44:49.670890  479667 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
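The openssl x509 -hash calls above compute each CA's subject-name hash; OpenSSL looks certificates up in /etc/ssl/certs by <hash>.0 symlinks, which is what the sudo test -L probes (b5213941.0, 51391683.0, 3ec20f2e.0) verify. Reproducing the mapping by hand, as a sketch:

    # The subject hash names the symlink OpenSSL expects in /etc/ssl/certs.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"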
	I1222 23:44:49.678351  479667 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:44:49.682145  479667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 23:44:49.716014  479667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 23:44:49.750263  479667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 23:44:49.785908  479667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 23:44:49.821006  479667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 23:44:49.855013  479667 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
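Each -checkend 86400 probe asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will not expire within that window, nonzero means it will (or already has), which is what triggers regeneration. A sketch wrapping that into a readable check:

    # Warn when a cert is within 24h of expiry.
    if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "certificate expires within 24h; regeneration needed" >&2
    fi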
	I1222 23:44:49.891840  479667 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-767823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-767823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:44:49.891982  479667 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:44:49.914577  479667 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:44:49.922561  479667 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 23:44:49.922575  479667 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 23:44:49.922627  479667 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 23:44:49.930861  479667 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 23:44:49.931427  479667 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-767823" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:44:49.931764  479667 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-767823" cluster setting kubeconfig missing "kubernetes-upgrade-767823" context setting]
	I1222 23:44:49.932217  479667 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:44:49.932918  479667 kapi.go:59] client config for kubernetes-upgrade-767823: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/client.crt", KeyFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubernetes-upgrade-767823/client.key", CAFile:"/home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2765fe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1222 23:44:49.933304  479667 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1222 23:44:49.933323  479667 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I1222 23:44:49.933327  479667 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I1222 23:44:49.933331  479667 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I1222 23:44:49.933335  479667 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1222 23:44:49.933342  479667 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1222 23:44:49.933692  479667 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 23:44:49.942120  479667 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-22 23:44:14.016621380 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-22 23:44:49.132016768 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-767823"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
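The drift diff above is the v1beta3 to v1beta4 migration: extraArgs move from string maps to name/value lists, and the etcd proxy-refresh-interval override is dropped. kubeadm ships a converter for exactly this kind of schema bump; a hedged sketch using the paths from the log:

    # Convert an old-schema kubeadm config to the version this kubeadm understands.
    kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.yaml.new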
	I1222 23:44:49.942136  479667 kubeadm.go:1161] stopping kube-system containers ...
	I1222 23:44:49.942181  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:44:49.965038  479667 docker.go:487] Stopping containers: [5fdee4dffac9 d0cd4466a37f 4db369cb7727 eec44bb2b99c 8e308071b915 cd71e22eed83 aa3b2a85357e a73fb46017f8]
	I1222 23:44:49.965103  479667 ssh_runner.go:195] Run: docker stop 5fdee4dffac9 d0cd4466a37f 4db369cb7727 eec44bb2b99c 8e308071b915 cd71e22eed83 aa3b2a85357e a73fb46017f8
	I1222 23:44:49.988494  479667 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1222 23:44:50.007008  479667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:44:50.015306  479667 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 22 23:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 22 23:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 22 23:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 22 23:44 /etc/kubernetes/scheduler.conf
	
	I1222 23:44:50.015373  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:44:50.023320  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:44:50.031085  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:44:50.038331  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 23:44:50.038387  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:44:50.045565  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:44:50.052841  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1222 23:44:50.052892  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:44:50.059954  479667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:44:50.067424  479667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 23:44:50.111566  479667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 23:44:50.877311  479667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1222 23:44:51.085837  479667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1222 23:44:51.148394  479667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
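Rather than a full kubeadm init, the restart replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each with PATH pointed at the version-specific binaries directory. A minimal sketch of that sequence, illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const binDir = "/var/lib/minikube/binaries/v1.35.0-rc.1"
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        // The phase order as it appears in the log lines above.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
            if err := exec.Command("sudo", "/bin/bash", "-c", cmd).Run(); err != nil {
                fmt.Printf("phase %q failed: %v\n", phase, err)
                return
            }
        }
    }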
	I1222 23:44:51.200412  479667 api_server.go:52] waiting for apiserver process to appear ...
	I1222 23:44:51.200502  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:51.701736  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:52.200819  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:52.701275  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:53.200726  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:53.700755  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:54.201281  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:54.700785  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:55.200857  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:55.701093  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:56.201551  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:56.700798  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:57.201234  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:57.701353  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:58.200879  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:58.700799  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:59.200689  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:44:59.700922  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:00.201041  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:00.701666  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:01.200635  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:01.700774  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:02.201136  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:02.701033  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:03.201305  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:03.701453  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:04.200689  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:04.700657  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:05.201184  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:05.700739  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:06.200699  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:06.701220  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:07.201626  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:07.700677  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:08.200815  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:08.701044  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:09.201643  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:09.700619  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:10.200736  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:10.701359  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:11.200791  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:11.700580  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:12.201237  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:12.700855  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:13.201569  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:13.700736  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:14.200724  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:14.700714  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:15.201616  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:15.700868  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:16.200950  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:16.701538  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:17.201438  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:17.701642  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:18.201276  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:18.700939  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:19.200661  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:19.700870  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:20.201214  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:20.701555  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:21.200705  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:21.701429  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:22.201242  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:22.700743  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:23.200995  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:23.701423  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:24.200748  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:24.701191  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:25.201270  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:25.701033  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:26.201554  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:26.700937  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:27.201289  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:27.701141  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:28.200706  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:28.701084  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:29.201338  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:29.700689  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:30.201168  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:30.700853  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:31.200693  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:31.700726  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:32.201413  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:32.701557  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:33.200758  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:33.700870  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:34.201420  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:34.701619  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:35.200694  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:35.701520  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:36.200577  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:36.700563  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:37.201507  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:37.700801  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:38.201051  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:38.701234  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:39.200620  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:39.700958  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:40.201235  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:40.700748  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:41.200754  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:41.701536  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:42.201253  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:42.701482  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:43.200729  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:43.701495  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:44.200725  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:44.700728  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:45.201218  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:45.700740  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:46.200855  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:46.701547  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:47.201290  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:47.701472  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:48.201516  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:48.701249  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:49.201449  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:49.700673  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:50.202727  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:50.700832  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
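The block above is the apiserver waiter (api_server.go:52): pgrep is re-run on a roughly 500ms cadence, and after about a minute without a match the run moves on to gathering diagnostics. A sketch of such a poll loop; waitForAPIServer is a hypothetical name:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep flags as in the log: -f matches against the full command
            // line, -x requires the pattern to match it exactly, -n keeps the
            // newest match. Exit status 0 means a process was found.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
    }

    func main() {
        if err := waitForAPIServer(60 * time.Second); err != nil {
            fmt.Println(err) // in the log above, diagnostics gathering begins here
        }
    }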
	I1222 23:45:51.200734  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:45:51.220835  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:45:51.220900  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:45:51.241778  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:45:51.241851  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:45:51.264101  479667 logs.go:282] 0 containers: []
	W1222 23:45:51.264124  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:45:51.264169  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:45:51.284683  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:45:51.284803  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:45:51.306527  479667 logs.go:282] 0 containers: []
	W1222 23:45:51.306547  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:45:51.306616  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:45:51.325890  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:45:51.325966  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:45:51.345734  479667 logs.go:282] 0 containers: []
	W1222 23:45:51.345763  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:45:51.345818  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:45:51.365452  479667 logs.go:282] 0 containers: []
	W1222 23:45:51.365477  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:45:51.365500  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:45:51.365514  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:45:51.396207  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:45:51.396236  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
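The "container status" gatherer prefers crictl and falls back to docker ps -a when crictl is absent or fails, which is what the `which crictl || echo crictl` one-liner above encodes. A rough Go equivalent of that fallback, illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or failed: same fallback as the shell one-liner
            // in the log (`... || sudo docker ps -a`).
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(string(out))
    }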
	I1222 23:45:51.438803  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:45:51.438835  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:45:51.496326  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
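"connection refused" on localhost:8443 means nothing is bound to the apiserver port, which is consistent with pgrep never finding a kube-apiserver process above. A quick TCP probe separates this case from timeouts or TLS errors; the snippet below is a standalone check, not part of minikube:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // A refused connection fails here immediately; a firewall drop
            // would instead run into the 2s timeout.
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }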
	I1222 23:45:51.496346  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:45:51.496360  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:45:51.526260  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:45:51.526292  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:45:51.548094  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:45:51.548123  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:45:51.608140  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:45:51.608167  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:45:51.627449  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:45:51.627483  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:45:51.669570  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:45:51.669615  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
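Each gather cycle above (and the repeats that follow) has the same shape: for every control-plane component, list containers whose names carry the k8s_<component> prefix, warn when none match, and tail the last 400 log lines from each hit. A compact sketch of one cycle; the output formatting is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The component order used by the log-gathering cycle above.
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("listing %s containers failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }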
	I1222 23:45:54.201130  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:54.214050  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:45:54.236035  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:45:54.236122  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:45:54.257777  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:45:54.257856  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:45:54.283015  479667 logs.go:282] 0 containers: []
	W1222 23:45:54.283047  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:45:54.283107  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:45:54.306335  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:45:54.306423  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:45:54.331555  479667 logs.go:282] 0 containers: []
	W1222 23:45:54.331582  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:45:54.331741  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:45:54.353133  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:45:54.353199  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:45:54.375442  479667 logs.go:282] 0 containers: []
	W1222 23:45:54.375467  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:45:54.375523  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:45:54.398951  479667 logs.go:282] 0 containers: []
	W1222 23:45:54.398979  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:45:54.398998  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:45:54.399012  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:45:54.460439  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:45:54.460471  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:45:54.480810  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:45:54.480849  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:45:54.517628  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:45:54.517667  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:45:54.554082  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:45:54.554125  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:45:54.586183  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:45:54.586223  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:45:54.655870  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:45:54.655899  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:45:54.655934  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:45:54.702758  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:45:54.702799  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:45:54.738270  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:45:54.738307  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:45:57.272078  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:45:57.285423  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:45:57.306616  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:45:57.306698  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:45:57.326022  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:45:57.326100  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:45:57.345525  479667 logs.go:282] 0 containers: []
	W1222 23:45:57.345549  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:45:57.345624  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:45:57.365694  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:45:57.365761  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:45:57.385995  479667 logs.go:282] 0 containers: []
	W1222 23:45:57.386022  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:45:57.386073  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:45:57.407293  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:45:57.407372  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:45:57.427914  479667 logs.go:282] 0 containers: []
	W1222 23:45:57.427938  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:45:57.427993  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:45:57.447368  479667 logs.go:282] 0 containers: []
	W1222 23:45:57.447398  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:45:57.447419  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:45:57.447448  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:45:57.501263  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:45:57.501292  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:45:57.519088  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:45:57.519119  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:45:57.547911  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:45:57.547940  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:45:57.578552  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:45:57.578586  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:45:57.615776  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:45:57.615809  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:45:57.644404  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:45:57.644434  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:45:57.706955  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:45:57.706985  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:45:57.707001  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:45:57.748344  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:45:57.748385  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:00.283509  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:00.298142  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:00.322448  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:00.322528  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:00.343020  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:00.343102  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:00.365237  479667 logs.go:282] 0 containers: []
	W1222 23:46:00.365259  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:00.365311  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:00.388182  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:00.388269  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:00.412398  479667 logs.go:282] 0 containers: []
	W1222 23:46:00.412428  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:00.412483  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:00.435014  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:00.435077  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:00.456510  479667 logs.go:282] 0 containers: []
	W1222 23:46:00.456536  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:00.456620  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:00.479549  479667 logs.go:282] 0 containers: []
	W1222 23:46:00.479577  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:00.479613  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:00.479628  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:00.514322  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:00.514354  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:00.550260  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:00.550294  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:00.619773  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:00.619818  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:00.640263  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:00.640297  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:00.675446  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:00.675497  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:00.709943  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:00.709972  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:00.736577  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:00.736623  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:00.805017  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:00.805044  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:00.805061  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:03.344198  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:03.357472  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:03.379609  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:03.379721  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:03.402291  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:03.402369  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:03.425460  479667 logs.go:282] 0 containers: []
	W1222 23:46:03.425486  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:03.425545  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:03.448407  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:03.448501  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:03.471170  479667 logs.go:282] 0 containers: []
	W1222 23:46:03.471200  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:03.471260  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:03.493962  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:03.494043  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:03.515510  479667 logs.go:282] 0 containers: []
	W1222 23:46:03.515538  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:03.515630  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:03.538315  479667 logs.go:282] 0 containers: []
	W1222 23:46:03.538341  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:03.538360  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:03.538377  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:03.610066  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:03.610092  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:03.610109  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:03.653108  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:03.653139  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:03.689927  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:03.689963  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:03.724572  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:03.724618  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:03.757979  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:03.758013  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:03.822252  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:03.822283  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:03.841391  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:03.841416  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:03.875610  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:03.875645  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:06.402219  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:06.413640  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:06.433731  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:06.433801  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:06.453019  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:06.453090  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:06.472513  479667 logs.go:282] 0 containers: []
	W1222 23:46:06.472538  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:06.472601  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:06.492380  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:06.492462  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:06.511633  479667 logs.go:282] 0 containers: []
	W1222 23:46:06.511657  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:06.511726  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:06.530278  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:06.530342  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:06.550348  479667 logs.go:282] 0 containers: []
	W1222 23:46:06.550368  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:06.550431  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:06.571471  479667 logs.go:282] 0 containers: []
	W1222 23:46:06.571498  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:06.571523  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:06.571538  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:06.633546  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:06.633577  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:06.652403  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:06.652432  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:06.684152  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:06.684184  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:06.740489  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:06.740509  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:06.740524  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:06.773988  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:06.774018  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:06.803558  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:06.803586  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:06.833204  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:06.833230  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:06.858120  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:06.858151  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:09.389523  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:09.400930  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:09.420424  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:09.420490  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:09.439263  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:09.439328  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:09.457615  479667 logs.go:282] 0 containers: []
	W1222 23:46:09.457636  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:09.457686  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:09.479513  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:09.479615  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:09.498948  479667 logs.go:282] 0 containers: []
	W1222 23:46:09.498973  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:09.499027  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:09.517373  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:09.517447  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:09.536662  479667 logs.go:282] 0 containers: []
	W1222 23:46:09.536690  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:09.536734  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:09.554492  479667 logs.go:282] 0 containers: []
	W1222 23:46:09.554516  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:09.554535  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:09.554549  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:09.582303  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:09.582333  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:09.611900  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:09.611937  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:09.645430  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:09.645456  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:09.697150  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:09.697184  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:09.725876  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:09.725915  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:09.747850  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:09.747881  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:09.767192  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:09.767231  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:09.826804  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:09.826822  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:09.826836  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:12.361291  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:12.372912  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:12.393310  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:12.393381  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:12.413300  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:12.413394  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:12.440027  479667 logs.go:282] 0 containers: []
	W1222 23:46:12.440052  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:12.440099  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:12.463063  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:12.463152  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:12.483242  479667 logs.go:282] 0 containers: []
	W1222 23:46:12.483273  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:12.483329  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:12.504242  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:12.504355  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:12.524305  479667 logs.go:282] 0 containers: []
	W1222 23:46:12.524334  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:12.524393  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:12.545030  479667 logs.go:282] 0 containers: []
	W1222 23:46:12.545061  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:12.545080  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:12.545094  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:12.575738  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:12.575781  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:12.612809  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:12.612844  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:12.635560  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:12.635611  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:12.673809  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:12.673857  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:12.695443  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:12.695475  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:12.729421  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:12.729460  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:12.759154  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:12.759188  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:12.828023  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:12.828055  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:12.895814  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:15.397469  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:15.408772  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:15.427906  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:15.427980  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:15.447038  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:15.447127  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:15.466513  479667 logs.go:282] 0 containers: []
	W1222 23:46:15.466546  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:15.466637  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:15.486493  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:15.486587  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:15.505967  479667 logs.go:282] 0 containers: []
	W1222 23:46:15.505992  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:15.506048  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:15.524673  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:15.524757  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:15.542855  479667 logs.go:282] 0 containers: []
	W1222 23:46:15.542879  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:15.542936  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:15.561172  479667 logs.go:282] 0 containers: []
	W1222 23:46:15.561199  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:15.561218  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:15.561229  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:15.584183  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:15.584209  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:15.632476  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:15.632505  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:15.666897  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:15.666928  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:15.693964  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:15.693992  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:15.724031  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:15.724061  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:15.742234  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:15.742266  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:15.798575  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:15.798621  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:15.798636  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:15.827564  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:15.827606  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:18.356709  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:18.368040  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:18.387518  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:18.387606  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:18.406546  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:18.406639  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:18.425204  479667 logs.go:282] 0 containers: []
	W1222 23:46:18.425230  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:18.425293  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:18.443932  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:18.444012  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:18.462859  479667 logs.go:282] 0 containers: []
	W1222 23:46:18.462885  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:18.462944  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:18.481970  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:18.482041  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:18.500906  479667 logs.go:282] 0 containers: []
	W1222 23:46:18.500928  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:18.500973  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:18.519062  479667 logs.go:282] 0 containers: []
	W1222 23:46:18.519090  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:18.519110  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:18.519121  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:18.571721  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:18.571754  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:18.591277  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:18.591306  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:18.623304  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:18.623333  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:18.650147  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:18.650182  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:18.678251  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:18.678279  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:18.733573  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:18.733607  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:18.733626  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:18.761733  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:18.761761  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:18.784975  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:18.785011  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:21.316916  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:21.329392  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:21.349706  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:21.349784  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:21.368108  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:21.368183  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:21.385847  479667 logs.go:282] 0 containers: []
	W1222 23:46:21.385870  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:21.385921  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:21.404487  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:21.404549  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:21.422776  479667 logs.go:282] 0 containers: []
	W1222 23:46:21.422805  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:21.422860  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:21.441773  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:21.441848  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:21.459876  479667 logs.go:282] 0 containers: []
	W1222 23:46:21.459898  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:21.459945  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:21.478182  479667 logs.go:282] 0 containers: []
	W1222 23:46:21.478206  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:21.478225  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:21.478236  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:21.533707  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:21.533727  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:21.533739  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:21.566385  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:21.566422  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:21.595488  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:21.595523  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:21.623173  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:21.623202  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:21.670823  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:21.670853  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:21.688525  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:21.688551  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:21.715423  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:21.715451  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:21.738129  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:21.738156  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:24.275608  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:24.287525  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:24.307382  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:24.307467  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:24.326856  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:24.326938  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:24.348795  479667 logs.go:282] 0 containers: []
	W1222 23:46:24.348820  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:24.348877  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:24.367402  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:24.367468  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:24.385513  479667 logs.go:282] 0 containers: []
	W1222 23:46:24.385533  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:24.385618  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:24.404673  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:24.404740  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:24.422436  479667 logs.go:282] 0 containers: []
	W1222 23:46:24.422456  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:24.422512  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:24.440889  479667 logs.go:282] 0 containers: []
	W1222 23:46:24.440914  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:24.440933  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:24.440949  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:24.491558  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:24.491601  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:24.511430  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:24.511455  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:24.545937  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:24.545967  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:24.577734  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:24.577760  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:24.635287  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:24.635311  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:24.635328  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:24.667272  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:24.667311  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:24.700839  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:24.700879  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:24.728855  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:24.728888  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:27.254123  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:27.265825  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:27.287659  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:27.287739  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:27.306050  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:27.306118  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:27.325131  479667 logs.go:282] 0 containers: []
	W1222 23:46:27.325157  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:27.325208  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:27.343546  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:27.343641  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:27.362899  479667 logs.go:282] 0 containers: []
	W1222 23:46:27.362927  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:27.362981  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:27.381005  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:27.381067  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:27.397796  479667 logs.go:282] 0 containers: []
	W1222 23:46:27.397825  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:27.397880  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:27.416356  479667 logs.go:282] 0 containers: []
	W1222 23:46:27.416376  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:27.416391  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:27.416403  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:27.444401  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:27.444431  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:27.494262  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:27.494293  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:27.527621  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:27.527648  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:27.556389  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:27.556420  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:27.584094  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:27.584122  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:27.607875  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:27.607903  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:27.636272  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:27.636295  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:27.653782  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:27.653808  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:27.707995  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
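The same gather-and-retry pass repeats roughly every three seconds because the health check keeps failing: a kube-apiserver container exists (4db369cb7727 is found on every pass) but nothing answers on localhost:8443. A sketch of such a wait loop, with the pgrep and kubectl invocations copied from the log; the 3s interval matches the observed cadence and the deadline is an assumption, not minikube's actual tuning:

    // waitForAPIServer retries until `kubectl describe nodes` succeeds
    // or the deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func healthy() bool {
    	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
    		return false // no apiserver process at all
    	}
    	// A live process is not enough: in the log it can exist while
    	// localhost:8443 still refuses connections, so also require a
    	// successful API round-trip.
    	return exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
    		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		if healthy() {
    			fmt.Println("apiserver is answering")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s cadence above
    	}
    	fmt.Println("gave up waiting for apiserver")
    }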
	I1222 23:46:30.208731  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:30.220047  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:30.240372  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:30.240446  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:30.261206  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:30.261296  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:30.282042  479667 logs.go:282] 0 containers: []
	W1222 23:46:30.282071  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:30.282133  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:30.302802  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:30.302898  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:30.321828  479667 logs.go:282] 0 containers: []
	W1222 23:46:30.321848  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:30.321889  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:30.341965  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:30.342037  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:30.360684  479667 logs.go:282] 0 containers: []
	W1222 23:46:30.360714  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:30.360761  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:30.379071  479667 logs.go:282] 0 containers: []
	W1222 23:46:30.379099  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:30.379118  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:30.379129  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:30.429146  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:30.429182  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:30.447395  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:30.447424  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:30.503099  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:30.503118  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:30.503132  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:30.530795  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:30.530826  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:30.564447  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:30.564478  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:30.592675  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:30.592703  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:30.618949  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:30.618975  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:30.640840  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:30.640866  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:33.174355  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:33.185502  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:33.205488  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:33.205569  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:33.223785  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:33.223845  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:33.242389  479667 logs.go:282] 0 containers: []
	W1222 23:46:33.242417  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:33.242478  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:33.261534  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:33.261647  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:33.283619  479667 logs.go:282] 0 containers: []
	W1222 23:46:33.283651  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:33.283715  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:33.304040  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:33.304109  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:33.322484  479667 logs.go:282] 0 containers: []
	W1222 23:46:33.322508  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:33.322554  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:33.340931  479667 logs.go:282] 0 containers: []
	W1222 23:46:33.340958  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:33.340976  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:33.340989  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:33.396441  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:33.396459  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:33.396471  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:33.424944  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:33.424973  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:33.450889  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:33.450921  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:33.480221  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:33.480253  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:33.510132  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:33.510158  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:33.557794  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:33.557824  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:33.589998  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:33.590030  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:33.613259  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:33.613283  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:36.132149  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:36.144230  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:36.163995  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:36.164063  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:36.182952  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:36.183031  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:36.201589  479667 logs.go:282] 0 containers: []
	W1222 23:46:36.201629  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:36.201692  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:36.221773  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:36.221855  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:36.240640  479667 logs.go:282] 0 containers: []
	W1222 23:46:36.240666  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:36.240714  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:36.259720  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:36.259788  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:36.279172  479667 logs.go:282] 0 containers: []
	W1222 23:46:36.279197  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:36.279260  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:36.299716  479667 logs.go:282] 0 containers: []
	W1222 23:46:36.299745  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:36.299762  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:36.299773  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:36.327216  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:36.327243  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:36.354641  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:36.354670  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:36.388459  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:36.388490  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:36.416168  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:36.416195  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:36.438738  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:36.438767  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:36.467586  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:36.467631  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:36.517677  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:36.517715  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:36.535211  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:36.535238  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:36.590651  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
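Every source in the "Gathering logs for ..." steps is a single bash -c command: docker logs for containers, journalctl for services, dmesg for the kernel, and a crictl-with-docker-fallback for container status. The command strings below are copied verbatim from the log; only the Go wrapper around them is a sketch:

    // gather runs one diagnostic command through bash, as the
    // ssh_runner.go Run lines show.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    var sources = map[string]string{
    	"kubelet":          `sudo journalctl -u kubelet -n 400`,
    	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    	"Docker":           `sudo journalctl -u docker -u cri-docker -n 400`,
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	"kube-apiserver":   `docker logs --tail 400 4db369cb7727`,
    }

    func gather(name, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    }

    func main() {
    	for name, cmd := range sources {
    		gather(name, cmd)
    	}
    }

Note the container-status fallback: if crictl is not on PATH, the command degrades to plain "sudo docker ps -a", which is why this step still succeeds here even though no CRI tooling is guaranteed.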
	I1222 23:46:39.092311  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:39.104299  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:39.123177  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:39.123243  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:39.141814  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:39.141877  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:39.160079  479667 logs.go:282] 0 containers: []
	W1222 23:46:39.160109  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:39.160162  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:39.178735  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:39.178808  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:39.197299  479667 logs.go:282] 0 containers: []
	W1222 23:46:39.197321  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:39.197373  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:39.216051  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:39.216132  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:39.236696  479667 logs.go:282] 0 containers: []
	W1222 23:46:39.236722  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:39.236781  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:39.257254  479667 logs.go:282] 0 containers: []
	W1222 23:46:39.257278  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:39.257293  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:39.257304  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:39.294060  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:39.294090  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:39.323408  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:39.323438  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:39.351677  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:39.351705  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:39.374466  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:39.374497  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:39.403874  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:39.403904  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:39.458581  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:39.458618  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:39.458634  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:39.486418  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:39.486456  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:39.534589  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:39.534628  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:42.052893  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:42.064170  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:42.082955  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:42.083028  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:42.103523  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:42.103627  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:42.122185  479667 logs.go:282] 0 containers: []
	W1222 23:46:42.122212  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:42.122264  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:42.141087  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:42.141161  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:42.159287  479667 logs.go:282] 0 containers: []
	W1222 23:46:42.159308  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:42.159353  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:42.178358  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:42.178437  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:42.197432  479667 logs.go:282] 0 containers: []
	W1222 23:46:42.197459  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:42.197516  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:42.216290  479667 logs.go:282] 0 containers: []
	W1222 23:46:42.216317  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:42.216349  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:42.216368  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:42.244368  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:42.244401  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:42.274817  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:42.274849  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:42.304046  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:42.304073  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:42.334687  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:42.334713  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:42.357137  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:42.357163  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:42.408240  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:42.408277  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:42.426560  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:42.426589  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:42.481008  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:42.481034  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:42.481052  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:45.014711  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:45.027303  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:45.047857  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:45.047937  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:45.067265  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:45.067342  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:45.084987  479667 logs.go:282] 0 containers: []
	W1222 23:46:45.085013  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:45.085069  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:45.103674  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:45.103756  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:45.122908  479667 logs.go:282] 0 containers: []
	W1222 23:46:45.122934  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:45.122978  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:45.141690  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:45.141752  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:45.159868  479667 logs.go:282] 0 containers: []
	W1222 23:46:45.159888  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:45.159929  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:45.178033  479667 logs.go:282] 0 containers: []
	W1222 23:46:45.178063  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:45.178082  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:45.178095  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:45.228347  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:45.228375  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:45.245937  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:45.245963  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:45.313156  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:45.313176  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:45.313189  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:45.350834  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:45.350868  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:45.374708  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:45.374737  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:45.406328  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:45.406359  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:45.434481  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:45.434508  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:45.462666  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:45.462695  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
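Every describe-nodes attempt fails with the same "connection to the server localhost:8443 was refused", meaning nothing is accepting TCP connections on the apiserver port at all, as opposed to a TLS or authorization failure. A direct probe that separates the two cases, independent of kubectl (a sketch, not part of minikube):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// Connection refused: matches the kubectl error in the log.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port open; a kubectl failure would then be TLS/auth, not connectivity")
    }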
	I1222 23:46:47.993732  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:48.005665  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:48.026860  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:48.026943  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:48.047482  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:48.047556  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:48.065976  479667 logs.go:282] 0 containers: []
	W1222 23:46:48.066000  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:48.066046  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:48.083787  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:48.083850  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:48.102839  479667 logs.go:282] 0 containers: []
	W1222 23:46:48.102867  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:48.102915  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:48.121492  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:48.121554  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:48.139623  479667 logs.go:282] 0 containers: []
	W1222 23:46:48.139644  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:48.139687  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:48.158116  479667 logs.go:282] 0 containers: []
	W1222 23:46:48.158136  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:48.158151  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:48.158161  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:48.184824  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:48.184849  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:48.206280  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:48.206305  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:48.236769  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:48.236804  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:48.296833  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:48.296863  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:48.314431  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:48.314457  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:48.368818  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:48.368840  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:48.368851  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:48.402455  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:48.402484  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:48.431059  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:48.431086  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:50.959227  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:50.970507  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:50.990457  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:50.990528  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:51.009704  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:51.009782  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:51.031525  479667 logs.go:282] 0 containers: []
	W1222 23:46:51.031551  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:51.031634  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:51.051131  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:51.051213  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:51.069163  479667 logs.go:282] 0 containers: []
	W1222 23:46:51.069191  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:51.069245  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:51.087794  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:51.087869  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:51.107542  479667 logs.go:282] 0 containers: []
	W1222 23:46:51.107568  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:51.107634  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:51.125949  479667 logs.go:282] 0 containers: []
	W1222 23:46:51.125971  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:51.125992  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:51.126004  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:51.176747  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:51.176775  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:51.194032  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:51.194056  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:51.228800  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:51.228834  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:51.256189  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:51.256227  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:51.288694  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:51.288722  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:51.344729  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:51.344751  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:51.344763  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:51.375430  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:51.375456  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:51.401947  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:51.401974  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:53.932658  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:53.943747  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:53.964313  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:53.964380  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:53.983190  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:53.983257  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:54.002082  479667 logs.go:282] 0 containers: []
	W1222 23:46:54.002110  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:54.002162  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:54.022256  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:54.022340  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:54.043716  479667 logs.go:282] 0 containers: []
	W1222 23:46:54.043744  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:54.043800  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:54.062741  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:54.062828  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:54.081647  479667 logs.go:282] 0 containers: []
	W1222 23:46:54.081675  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:54.081738  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:54.099995  479667 logs.go:282] 0 containers: []
	W1222 23:46:54.100026  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:54.100046  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:54.100061  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:54.126195  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:54.126219  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:54.152975  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:54.153003  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:54.174857  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:54.174881  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:54.223221  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:54.223247  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:54.240558  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:54.240584  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:54.275147  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:54.275174  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:54.304504  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:54.304543  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:54.348615  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:54.348644  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:54.405502  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
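All of these Run lines execute inside the node container over SSH; ssh_runner.go:195 is the shared entry point. A bare-bones version of that transport using golang.org/x/crypto/ssh, where the address, user and key path are hypothetical placeholders rather than values from this run:

    // Minimal SSH command runner in the spirit of ssh_runner.go.
    // Host/port and key location are assumptions for illustration.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/machine/id_rsa") // hypothetical key path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker", // node user for the docker driver
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32772", cfg) // hypothetical mapped port
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo journalctl -u kubelet -n 400`)
    	fmt.Printf("err=%v\n%s", err, out)
    }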
	I1222 23:46:56.905863  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:56.916970  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:56.937105  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:56.937171  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:56.956305  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:56.956395  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:56.975142  479667 logs.go:282] 0 containers: []
	W1222 23:46:56.975169  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:56.975234  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:56.995422  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:56.995494  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:57.017973  479667 logs.go:282] 0 containers: []
	W1222 23:46:57.018002  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:57.018059  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:46:57.040588  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:46:57.040697  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:46:57.060500  479667 logs.go:282] 0 containers: []
	W1222 23:46:57.060523  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:46:57.060574  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:46:57.079736  479667 logs.go:282] 0 containers: []
	W1222 23:46:57.079760  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:46:57.079779  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:46:57.079794  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:46:57.107244  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:46:57.107270  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:46:57.134615  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:46:57.134642  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:46:57.157633  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:46:57.157658  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:46:57.189641  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:46:57.189668  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:46:57.216667  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:46:57.216693  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:46:57.247646  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:46:57.247680  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:46:57.298763  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:46:57.298792  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:46:57.316533  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:46:57.316559  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:46:57.371577  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:46:59.872732  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:46:59.885040  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:46:59.905266  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:46:59.905349  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:46:59.924471  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:46:59.924539  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:46:59.943336  479667 logs.go:282] 0 containers: []
	W1222 23:46:59.943365  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:46:59.943423  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:46:59.962692  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:46:59.962777  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:46:59.984069  479667 logs.go:282] 0 containers: []
	W1222 23:46:59.984096  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:46:59.984151  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:00.004478  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:00.004555  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:00.028986  479667 logs.go:282] 0 containers: []
	W1222 23:47:00.029007  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:00.029057  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:00.051160  479667 logs.go:282] 0 containers: []
	W1222 23:47:00.051184  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:00.051205  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:00.051222  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:00.106619  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:00.106639  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:00.106656  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:00.134797  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:00.134824  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:00.161908  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:00.161935  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:00.211797  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:00.211825  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:00.245420  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:00.245454  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:00.278103  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:00.278135  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:00.300763  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:00.300790  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:00.328736  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:00.328763  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
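Each retry cycle above enumerates the control-plane containers one component at a time and then tails whichever were found. The docker commands below are copied from the Run: lines; the loop around them is an illustrative sketch, not minikube's actual implementation:

    # Enumerate cri-dockerd pod containers by their k8s_<component> name prefix,
    # then tail the logs of any that exist (same 400-line depth as the log above).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}')
      for id in $ids; do
        docker logs --tail 400 "$id"
      done
    done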
	I1222 23:47:02.846743  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:02.858392  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:02.879339  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:02.879428  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:02.898353  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:02.898422  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:02.915481  479667 logs.go:282] 0 containers: []
	W1222 23:47:02.915501  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:02.915552  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:02.934563  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:02.934653  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:02.952923  479667 logs.go:282] 0 containers: []
	W1222 23:47:02.952945  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:02.952994  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:02.971809  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:02.971885  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:02.990568  479667 logs.go:282] 0 containers: []
	W1222 23:47:02.990588  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:02.990655  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:03.009933  479667 logs.go:282] 0 containers: []
	W1222 23:47:03.009959  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:03.009977  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:03.009992  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:03.030183  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:03.030214  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:03.063677  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:03.063709  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:03.090067  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:03.090096  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:03.117772  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:03.117800  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:03.146232  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:03.146262  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:03.196167  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:03.196194  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:03.251451  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:03.251479  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:03.251497  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:03.281850  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:03.281881  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:05.805305  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:05.817657  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:05.840207  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:05.840280  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:05.859817  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:05.859885  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:05.878828  479667 logs.go:282] 0 containers: []
	W1222 23:47:05.878856  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:05.878908  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:05.898373  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:05.898447  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:05.916496  479667 logs.go:282] 0 containers: []
	W1222 23:47:05.916522  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:05.916571  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:05.937042  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:05.937137  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:05.956728  479667 logs.go:282] 0 containers: []
	W1222 23:47:05.956751  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:05.956806  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:05.976710  479667 logs.go:282] 0 containers: []
	W1222 23:47:05.976731  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:05.976747  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:05.976757  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:06.011758  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:06.011798  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:06.043990  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:06.044021  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:06.076161  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:06.076189  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:06.099737  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:06.099771  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:06.136126  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:06.136156  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:06.188373  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:06.188412  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:06.206422  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:06.206447  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:06.277241  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:06.277265  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:06.277282  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:08.824318  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:08.835309  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:08.855879  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:08.855958  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:08.875133  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:08.875191  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:08.894210  479667 logs.go:282] 0 containers: []
	W1222 23:47:08.894235  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:08.894292  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:08.912904  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:08.912963  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:08.932065  479667 logs.go:282] 0 containers: []
	W1222 23:47:08.932093  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:08.932144  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:08.951519  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:08.951613  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:08.970114  479667 logs.go:282] 0 containers: []
	W1222 23:47:08.970135  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:08.970191  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:08.988397  479667 logs.go:282] 0 containers: []
	W1222 23:47:08.988421  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:08.988439  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:08.988455  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:09.006086  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:09.006114  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:09.066158  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:09.066180  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:09.066195  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:09.098667  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:09.098697  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:09.126940  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:09.126968  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:09.152986  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:09.153017  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:09.180612  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:09.180638  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:09.229346  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:09.229384  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:09.250399  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:09.250446  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
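Every "describe nodes" attempt fails identically: kubectl cannot reach the apiserver because nothing accepts connections on localhost:8443. A quick manual check from inside the node would look like the following; these are ordinary Linux tools chosen for illustration, not commands taken from the log:

    # Is anything listening on the apiserver port named in the error?
    sudo ss -ltn 'sport = :8443'
    # Probe the endpoint directly; "connection refused" here matches the log.
    curl -sk https://localhost:8443/healthz; echo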
	I1222 23:47:11.785654  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:11.796849  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:11.816445  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:11.816538  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:11.835694  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:11.835758  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:11.852384  479667 logs.go:282] 0 containers: []
	W1222 23:47:11.852408  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:11.852455  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:11.873559  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:11.873651  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:11.895497  479667 logs.go:282] 0 containers: []
	W1222 23:47:11.895519  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:11.895563  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:11.915682  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:11.915750  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:11.934380  479667 logs.go:282] 0 containers: []
	W1222 23:47:11.934401  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:11.934446  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:11.952939  479667 logs.go:282] 0 containers: []
	W1222 23:47:11.952959  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:11.952974  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:11.952984  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:12.004400  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:12.004436  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:12.024845  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:12.024878  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:12.081371  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:12.081390  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:12.081404  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:12.110015  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:12.110044  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:12.142354  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:12.142384  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:12.169902  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:12.169931  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:12.198046  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:12.198072  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:12.220368  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:12.220394  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:14.751150  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:14.762663  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:14.781975  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:14.782036  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:14.800312  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:14.800386  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:14.818919  479667 logs.go:282] 0 containers: []
	W1222 23:47:14.818946  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:14.819002  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:14.838129  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:14.838198  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:14.856519  479667 logs.go:282] 0 containers: []
	W1222 23:47:14.856542  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:14.856583  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:14.876520  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:14.876607  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:14.895641  479667 logs.go:282] 0 containers: []
	W1222 23:47:14.895661  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:14.895708  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:14.914068  479667 logs.go:282] 0 containers: []
	W1222 23:47:14.914095  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:14.914111  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:14.914122  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:14.962769  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:14.962805  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:14.995548  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:14.995580  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:15.019557  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:15.019602  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:15.039377  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:15.039405  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:15.092973  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:15.092994  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:15.093006  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:15.126745  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:15.126778  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:15.154198  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:15.154225  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:15.181106  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:15.181133  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:17.709562  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:17.721701  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:17.747080  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:17.747159  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:17.769175  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:17.769246  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:17.789384  479667 logs.go:282] 0 containers: []
	W1222 23:47:17.789412  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:17.789471  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:17.814297  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:17.814387  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:17.835437  479667 logs.go:282] 0 containers: []
	W1222 23:47:17.835457  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:17.835511  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:17.856292  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:17.856370  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:17.877147  479667 logs.go:282] 0 containers: []
	W1222 23:47:17.877174  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:17.877227  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:17.899031  479667 logs.go:282] 0 containers: []
	W1222 23:47:17.899058  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:17.899080  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:17.899095  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:17.924167  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:17.924196  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:17.942501  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:17.942526  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:18.012095  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:18.012120  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:18.012136  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:18.055433  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:18.055531  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:18.086375  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:18.086405  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:18.115435  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:18.115465  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:18.150205  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:18.150232  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:18.209795  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:18.209825  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
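The sudo pgrep -xnf kube-apiserver.*minikube.* line that opens each cycle is the health probe driving these retries: it succeeds only while a kube-apiserver process with a matching command line is running. The flag meanings come straight from pgrep(1); quoting the pattern is a minor safety addition:

    # -f  match against the full command line rather than the process name
    # -x  require the pattern to match that command line in its entirety
    # -n  report only the newest matching process
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'apiserver process not running'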
	I1222 23:47:20.742391  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:20.753578  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:20.772962  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:20.773036  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:20.792443  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:20.792513  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:20.811345  479667 logs.go:282] 0 containers: []
	W1222 23:47:20.811369  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:20.811414  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:20.830200  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:20.830266  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:20.849375  479667 logs.go:282] 0 containers: []
	W1222 23:47:20.849400  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:20.849452  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:20.867588  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:20.867676  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:20.887227  479667 logs.go:282] 0 containers: []
	W1222 23:47:20.887254  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:20.887313  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:20.906675  479667 logs.go:282] 0 containers: []
	W1222 23:47:20.906701  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:20.906718  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:20.906728  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:20.927947  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:20.927973  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:20.957231  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:20.957258  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:21.004794  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:21.004826  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:21.028517  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:21.028552  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:21.058796  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:21.058824  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:21.085406  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:21.085435  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:21.139525  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:21.139543  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:21.139556  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:21.173514  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:21.173543  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
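The "container status" step is a fallback chain taken verbatim from the Run: lines above; only the comments are added here:

    # `which crictl || echo crictl` yields the full crictl path when it is
    # installed; otherwise it emits the bare name, whose invocation then fails
    # and falls through to plain docker ps -a on the right of the ||.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a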
	I1222 23:47:23.701715  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:23.714806  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:23.736107  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:23.736183  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:23.756602  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:23.756691  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:23.775959  479667 logs.go:282] 0 containers: []
	W1222 23:47:23.775990  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:23.776049  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:23.796367  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:23.796451  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:23.815662  479667 logs.go:282] 0 containers: []
	W1222 23:47:23.815695  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:23.815751  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:23.834844  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:23.834922  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:23.854983  479667 logs.go:282] 0 containers: []
	W1222 23:47:23.855010  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:23.855069  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:23.873903  479667 logs.go:282] 0 containers: []
	W1222 23:47:23.873931  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:23.873949  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:23.873964  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:23.931720  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:23.931746  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:23.931763  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:23.966152  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:23.966183  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:23.993910  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:23.993943  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:24.020231  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:24.020273  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:24.072992  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:24.073026  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:24.091042  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:24.091070  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:24.119848  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:24.119876  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:24.148476  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:24.148507  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:26.682057  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:26.693004  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:26.713191  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:26.713259  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:26.733072  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:26.733147  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:26.761229  479667 logs.go:282] 0 containers: []
	W1222 23:47:26.761258  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:26.761311  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:26.784126  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:26.784205  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:26.803116  479667 logs.go:282] 0 containers: []
	W1222 23:47:26.803137  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:26.803181  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:26.822452  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:26.822519  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:26.843499  479667 logs.go:282] 0 containers: []
	W1222 23:47:26.843538  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:26.843646  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:26.875416  479667 logs.go:282] 0 containers: []
	W1222 23:47:26.875450  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:26.875471  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:26.875488  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:26.924122  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:26.941023  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:26.968073  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:26.968110  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:27.029020  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:27.029047  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:27.029066  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:27.078204  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:27.078346  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:27.110302  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:27.110333  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:27.133958  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:27.133988  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:27.182765  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:27.182793  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:27.210825  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:27.210849  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:29.739437  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:29.755852  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:29.776616  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:29.776699  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:29.798354  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:29.798444  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:29.821055  479667 logs.go:282] 0 containers: []
	W1222 23:47:29.821086  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:29.821145  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:29.840679  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:29.840789  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:29.885369  479667 logs.go:282] 0 containers: []
	W1222 23:47:29.885393  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:29.885438  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:29.904894  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:29.904970  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:29.923658  479667 logs.go:282] 0 containers: []
	W1222 23:47:29.923682  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:29.923739  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:29.946793  479667 logs.go:282] 0 containers: []
	W1222 23:47:29.946827  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:29.946848  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:29.946868  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:30.005632  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:30.005669  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:30.079624  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:30.079649  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:30.079667  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:30.108657  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:30.108688  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:30.137231  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:30.137259  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:30.168143  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:30.168172  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:30.201338  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:30.201364  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:30.221096  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:30.221134  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:30.254301  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:30.254332  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
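The kubelet, Docker, and cri-dockerd logs are read from the systemd journal with the exact commands shown above; the annotations are added for readability:

    # Last 400 journal entries for the kubelet unit
    sudo journalctl -u kubelet -n 400
    # Docker engine and cri-docker logs interleaved, last 400 entries combined
    sudo journalctl -u docker -u cri-docker -n 400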
	I1222 23:47:32.779455  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:32.790686  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:32.810205  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:32.810279  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:32.829400  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:32.829462  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:32.854005  479667 logs.go:282] 0 containers: []
	W1222 23:47:32.854032  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:32.854092  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:32.880103  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:32.880170  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:32.900079  479667 logs.go:282] 0 containers: []
	W1222 23:47:32.900102  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:32.900146  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:32.919518  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:32.919612  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:32.938203  479667 logs.go:282] 0 containers: []
	W1222 23:47:32.938224  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:32.938271  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:32.966014  479667 logs.go:282] 0 containers: []
	W1222 23:47:32.966046  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:32.966066  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:32.966099  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:32.990407  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:32.990439  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:33.048342  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:33.048392  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:33.113860  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:33.113879  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:33.113893  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:33.151089  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:33.151132  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:33.186906  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:33.186933  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:33.207166  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:33.207197  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:33.234761  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:33.234789  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:33.270240  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:33.270276  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:35.806441  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:35.817989  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:35.838052  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:35.838134  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:35.857944  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:35.858018  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:35.876840  479667 logs.go:282] 0 containers: []
	W1222 23:47:35.876863  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:35.876909  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:35.896042  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:35.896108  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:35.914217  479667 logs.go:282] 0 containers: []
	W1222 23:47:35.914239  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:35.914294  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:35.933544  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:35.933643  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:35.952371  479667 logs.go:282] 0 containers: []
	W1222 23:47:35.952397  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:35.952449  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:35.971286  479667 logs.go:282] 0 containers: []
	W1222 23:47:35.971308  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:35.971331  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:35.971344  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:36.023197  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:36.023228  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:36.043536  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:36.043571  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:36.100300  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:36.100321  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:36.100334  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:36.133530  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:36.133561  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:36.161524  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:36.161552  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:36.188713  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:36.188751  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:36.216985  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:36.217015  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:36.238983  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:36.239016  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
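(Editor's note: the block above is one full iteration of minikube's wait loop: check for a kube-apiserver process, and while it is absent, enumerate the k8s_<component> containers and tail their logs for the failure report. A rough Go sketch of that poll-and-collect pattern follows; names and structure are illustrative only, not minikube's actual implementation, which runs each of these commands over SSH inside the node container (ssh_runner.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // run executes a command line through bash and returns its trimmed stdout.
    func run(cmdline string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmdline).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        for {
            // Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
            // pgrep exits non-zero when no process matches.
            if _, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
                fmt.Println("kube-apiserver process found; stop polling")
                return
            }
            // Discover per-component containers by their k8s_<name> prefix,
            // mirroring the docker ps --filter=name=... calls in the log.
            for _, name := range []string{"kube-apiserver", "etcd", "coredns",
                "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
                ids, _ := run("docker ps -a --filter=name=k8s_" + name + " --format={{.ID}}")
                if ids == "" {
                    fmt.Printf("no container found matching %q\n", name)
                    continue
                }
                // Tail each container's logs, as in: docker logs --tail 400 <id>
                for _, id := range strings.Fields(ids) {
                    run("docker logs --tail 400 " + id)
                }
            }
            time.Sleep(3 * time.Second)
        }
    }

The same cycle repeats below, roughly every three seconds, for as long as the apiserver stays down. End of note.)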
	I1222 23:47:38.770815  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:38.782500  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:38.803239  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:38.803303  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:38.822551  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:38.822642  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:38.841884  479667 logs.go:282] 0 containers: []
	W1222 23:47:38.841910  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:38.841964  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:38.862843  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:38.862932  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:38.881461  479667 logs.go:282] 0 containers: []
	W1222 23:47:38.881490  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:38.881547  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:38.901057  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:38.901137  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:38.920273  479667 logs.go:282] 0 containers: []
	W1222 23:47:38.920300  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:38.920372  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:38.940149  479667 logs.go:282] 0 containers: []
	W1222 23:47:38.940179  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:38.940196  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:38.940207  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:38.990691  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:38.990722  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:39.030753  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:39.030792  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:39.062766  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:39.062800  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:39.093777  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:39.093813  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:39.111921  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:39.111950  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:39.167886  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:39.167912  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:39.167926  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:39.196073  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:39.196107  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:39.224093  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:39.224122  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:41.746478  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:41.757968  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:41.778739  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:41.778820  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:41.797801  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:41.797875  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:41.816914  479667 logs.go:282] 0 containers: []
	W1222 23:47:41.816944  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:41.816998  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:41.835731  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:41.835810  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:41.854204  479667 logs.go:282] 0 containers: []
	W1222 23:47:41.854230  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:41.854281  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:41.873505  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:41.873574  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:41.892736  479667 logs.go:282] 0 containers: []
	W1222 23:47:41.892758  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:41.892801  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:41.911496  479667 logs.go:282] 0 containers: []
	W1222 23:47:41.911518  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:41.911536  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:41.911547  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:41.959370  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:41.959401  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:41.979299  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:41.979328  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:42.042453  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:42.042473  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:42.042493  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:42.079613  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:42.079650  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:42.109164  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:42.109192  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:42.136953  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:42.136985  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:42.165091  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:42.165122  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:42.186759  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:42.186789  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:44.717428  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:44.728969  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:44.750057  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:44.750141  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:44.770119  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:44.770197  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:44.788463  479667 logs.go:282] 0 containers: []
	W1222 23:47:44.788491  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:44.788536  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:44.807849  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:44.807928  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:44.826969  479667 logs.go:282] 0 containers: []
	W1222 23:47:44.826994  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:44.827051  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:44.846097  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:44.846173  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:44.865045  479667 logs.go:282] 0 containers: []
	W1222 23:47:44.865065  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:44.865108  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:44.885423  479667 logs.go:282] 0 containers: []
	W1222 23:47:44.885446  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:44.885463  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:44.885478  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:44.915330  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:44.915356  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:44.942735  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:44.942765  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:44.994566  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:44.994604  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:45.013340  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:45.013370  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:45.077280  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:45.077298  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:45.077312  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:45.115910  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:45.115941  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:45.143763  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:45.143807  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:45.171906  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:45.171934  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:47.695522  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:47.709267  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:47.733116  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:47.733203  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:47.756812  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:47.756903  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:47.779411  479667 logs.go:282] 0 containers: []
	W1222 23:47:47.779440  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:47.779502  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:47.802585  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:47.802681  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:47.824624  479667 logs.go:282] 0 containers: []
	W1222 23:47:47.824653  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:47.824711  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:47.845352  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:47.845436  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:47.868314  479667 logs.go:282] 0 containers: []
	W1222 23:47:47.868344  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:47.868407  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:47.891273  479667 logs.go:282] 0 containers: []
	W1222 23:47:47.891300  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:47.891320  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:47.891333  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:47.955054  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:47.955092  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:47.978059  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:47.978093  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:48.013423  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:48.013464  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:48.049368  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:48.049402  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:48.080289  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:48.080330  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:48.117041  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:48.117080  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:48.183425  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:48.183450  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:48.183466  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:48.225765  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:48.225808  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:50.759636  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:50.772065  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:50.791729  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:50.791789  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:50.811116  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:50.811207  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:50.830069  479667 logs.go:282] 0 containers: []
	W1222 23:47:50.830093  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:50.830150  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:50.849202  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:50.849275  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:50.868398  479667 logs.go:282] 0 containers: []
	W1222 23:47:50.868428  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:50.868474  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:50.887919  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:50.887986  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:50.907808  479667 logs.go:282] 0 containers: []
	W1222 23:47:50.907832  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:50.907875  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:50.928548  479667 logs.go:282] 0 containers: []
	W1222 23:47:50.928577  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:50.928605  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:50.928621  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:50.954967  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:50.954996  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:50.977054  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:50.977079  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:51.005142  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:51.005177  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:51.036293  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:51.036323  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:51.066539  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:51.066575  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:51.113912  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:51.113941  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:51.131152  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:51.131178  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:51.193024  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:51.193051  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:51.193068  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:53.727106  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:53.738768  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:53.757955  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:53.758027  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:53.776491  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:53.776558  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:53.795279  479667 logs.go:282] 0 containers: []
	W1222 23:47:53.795300  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:53.795350  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:53.813799  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:53.813865  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:53.832183  479667 logs.go:282] 0 containers: []
	W1222 23:47:53.832203  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:53.832255  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:53.851777  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:53.851843  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:53.869887  479667 logs.go:282] 0 containers: []
	W1222 23:47:53.869913  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:53.869963  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:53.888520  479667 logs.go:282] 0 containers: []
	W1222 23:47:53.888543  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:53.888559  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:53.888573  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:53.936388  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:53.936417  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:53.992028  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:53.992047  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:53.992061  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:54.022309  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:54.022342  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:54.045410  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:54.045440  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:54.075211  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:54.075235  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:54.092715  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:54.092740  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:54.127470  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:54.127498  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:54.155893  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:54.155920  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:56.686459  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:56.698115  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:56.717094  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:56.717157  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:56.735762  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:56.735836  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:56.755025  479667 logs.go:282] 0 containers: []
	W1222 23:47:56.755044  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:56.755089  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:56.774404  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:56.774475  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:56.792950  479667 logs.go:282] 0 containers: []
	W1222 23:47:56.792975  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:56.793032  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:56.812106  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:56.812180  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:56.830643  479667 logs.go:282] 0 containers: []
	W1222 23:47:56.830664  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:56.830708  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:56.851066  479667 logs.go:282] 0 containers: []
	W1222 23:47:56.851090  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:56.851107  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:47:56.851120  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:47:56.887286  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:47:56.887312  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:47:56.918047  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:47:56.918076  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:47:56.950973  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:56.951011  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:56.983948  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:47:56.983979  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:47:57.010848  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:47:57.010883  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:47:57.057513  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:57.057559  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:57.109644  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:57.109672  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:57.131246  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:57.131275  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:47:57.188984  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:47:59.690669  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:47:59.701947  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:47:59.721761  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:47:59.721832  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:47:59.741511  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:47:59.741576  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:47:59.760659  479667 logs.go:282] 0 containers: []
	W1222 23:47:59.760697  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:47:59.760740  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:47:59.779929  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:47:59.780000  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:47:59.798412  479667 logs.go:282] 0 containers: []
	W1222 23:47:59.798436  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:47:59.798478  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:47:59.817904  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:47:59.817976  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:47:59.837228  479667 logs.go:282] 0 containers: []
	W1222 23:47:59.837254  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:47:59.837299  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:47:59.857012  479667 logs.go:282] 0 containers: []
	W1222 23:47:59.857033  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:47:59.857051  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:47:59.857069  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:47:59.874848  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:47:59.874872  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:47:59.901966  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:47:59.901991  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:47:59.949723  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:47:59.949752  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:00.004872  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:00.004894  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:00.004908  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:00.042244  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:00.042278  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:00.070845  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:00.070873  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:00.098336  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:00.098362  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:00.120417  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:00.120442  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
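(Editor's note: the "container status" command above, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, is a shell fallback: run crictl if it is on PATH, and fall back to docker ps -a if crictl is missing or fails. A hypothetical Go rendering of the same preference order, for readers unfamiliar with the idiom:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // exec.LookPath reports whether a binary is on PATH, like `which`.
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        // Fallback when crictl is absent or errored, as in the shell version.
        out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        fmt.Print(string(out))
    }

End of note.)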
	I1222 23:48:02.654006  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:02.668630  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:02.689586  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:02.689689  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:02.709578  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:02.709673  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:02.728854  479667 logs.go:282] 0 containers: []
	W1222 23:48:02.728886  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:02.728930  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:02.748022  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:02.748086  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:02.766201  479667 logs.go:282] 0 containers: []
	W1222 23:48:02.766228  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:02.766277  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:02.785312  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:02.785372  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:02.804848  479667 logs.go:282] 0 containers: []
	W1222 23:48:02.804866  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:02.804910  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:02.824293  479667 logs.go:282] 0 containers: []
	W1222 23:48:02.824315  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:02.824329  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:02.824341  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:02.842133  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:02.842159  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:02.869089  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:02.869117  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:02.896622  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:02.896658  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:02.923533  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:02.923561  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:02.952671  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:02.952701  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:03.003742  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:03.003769  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:03.065973  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:03.065993  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:03.066005  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:03.100294  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:03.100323  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:05.623941  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:05.634975  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:05.655546  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:05.655641  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:05.676528  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:05.676607  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:05.695418  479667 logs.go:282] 0 containers: []
	W1222 23:48:05.695445  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:05.695505  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:05.714755  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:05.714818  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:05.733569  479667 logs.go:282] 0 containers: []
	W1222 23:48:05.733609  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:05.733660  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:05.753729  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:05.753803  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:05.771940  479667 logs.go:282] 0 containers: []
	W1222 23:48:05.771961  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:05.772014  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:05.787973  479667 logs.go:282] 0 containers: []
	W1222 23:48:05.787992  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:05.788007  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:05.788018  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:05.805253  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:05.805276  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:05.831771  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:05.831798  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:05.853520  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:05.853544  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:05.883158  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:05.883186  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:05.933229  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:05.933261  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:05.990287  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:05.990307  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:05.990325  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:06.029147  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:06.029177  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:06.058810  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:06.058842  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:08.586732  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:08.597825  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:08.617695  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:08.617775  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:08.637081  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:08.637157  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:08.656234  479667 logs.go:282] 0 containers: []
	W1222 23:48:08.656258  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:08.656304  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:08.675564  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:08.675661  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:08.695236  479667 logs.go:282] 0 containers: []
	W1222 23:48:08.695260  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:08.695309  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:08.713764  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:08.713836  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:08.732077  479667 logs.go:282] 0 containers: []
	W1222 23:48:08.732100  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:08.732156  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:08.750471  479667 logs.go:282] 0 containers: []
	W1222 23:48:08.750505  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:08.750521  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:08.750536  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:08.776616  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:08.776643  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:08.802456  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:08.802480  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:08.824038  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:08.824062  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:08.854763  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:08.854787  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:08.872458  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:08.872481  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:08.922248  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:08.922276  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:08.976488  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:08.976511  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:08.976533  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:09.009983  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:09.010012  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:11.542700  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:11.553834  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:11.573466  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:11.573531  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:11.593096  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:11.593163  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:11.611514  479667 logs.go:282] 0 containers: []
	W1222 23:48:11.611545  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:11.611605  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:11.630656  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:11.630721  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:11.649212  479667 logs.go:282] 0 containers: []
	W1222 23:48:11.649231  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:11.649275  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:11.668267  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:11.668329  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:11.685985  479667 logs.go:282] 0 containers: []
	W1222 23:48:11.686012  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:11.686068  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:11.704325  479667 logs.go:282] 0 containers: []
	W1222 23:48:11.704347  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:11.704366  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:11.704379  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:11.730820  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:11.730847  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:11.760831  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:11.760857  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:11.810911  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:11.810938  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:11.829774  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:11.829796  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:11.863271  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:11.863299  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:11.891340  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:11.891371  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:11.918166  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:11.918195  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:11.940286  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:11.940332  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:11.999661  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:14.500736  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:14.512345  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:14.533771  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:14.533883  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:14.553963  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:14.554041  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:14.572711  479667 logs.go:282] 0 containers: []
	W1222 23:48:14.572733  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:14.572785  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:14.591477  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:14.591560  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:14.609824  479667 logs.go:282] 0 containers: []
	W1222 23:48:14.609850  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:14.609907  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:14.628794  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:14.628867  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:14.648773  479667 logs.go:282] 0 containers: []
	W1222 23:48:14.648800  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:14.648846  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:14.671046  479667 logs.go:282] 0 containers: []
	W1222 23:48:14.671071  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:14.671091  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:14.671104  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:14.733147  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:14.733181  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:14.760304  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:14.760351  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:14.820444  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:14.820462  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:14.820474  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:14.855345  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:14.855386  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:14.890612  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:14.890639  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:14.912818  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:14.912842  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
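The container-status command above encodes a runtime-CLI fallback: "which crictl || echo crictl" substitutes either the real crictl path or the bare (non-existent) name, so when crictl is absent the first command fails and the "||" falls through to "sudo docker ps -a". The same fallback, sketched in Go (assumes passwordless sudo; illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or failing: fall back to the Docker CLI.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no usable container runtime CLI:", err)
            return
        }
        fmt.Print(string(out))
    }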
	I1222 23:48:14.940638  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:14.940662  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:14.981882  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:14.981915  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
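From here to 23:48:54 the section is one wait loop repeated on a roughly three-second cadence: probe for a running kube-apiserver process with pgrep and, while it stays down, redo the container discovery and re-gather the kubelet journal, dmesg, per-container "docker logs --tail 400", and the (always failing) describe-nodes output. A sketch of that loop's shape, with the interval, deadline and helper names as assumptions rather than minikube's actual constants:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiServerUp mirrors the probe in the log: success means a running
    // process whose command line matches "kube-apiserver.*minikube.*".
    func apiServerUp() bool {
        return exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run() == nil
    }

    // gatherDiagnostics stands in for the "Gathering logs for ..." block.
    func gatherDiagnostics() {
        fmt.Println("collecting kubelet, dmesg and component logs ...")
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            if apiServerUp() {
                fmt.Println("kube-apiserver is running")
                return
            }
            gatherDiagnostics()
            time.Sleep(3 * time.Second) // assumed cadence, matches timestamps
        }
        fmt.Println("deadline exceeded: fall back to resetting the cluster")
    }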
	I1222 23:48:17.512389  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:17.524070  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:17.546130  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:17.546210  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:17.575173  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:17.575271  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:17.596546  479667 logs.go:282] 0 containers: []
	W1222 23:48:17.596575  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:17.596648  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:17.616659  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:17.616722  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:17.635156  479667 logs.go:282] 0 containers: []
	W1222 23:48:17.635178  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:17.635237  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:17.662276  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:17.662353  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:17.684937  479667 logs.go:282] 0 containers: []
	W1222 23:48:17.684967  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:17.685028  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:17.764844  479667 logs.go:282] 0 containers: []
	W1222 23:48:17.764926  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:17.764977  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:17.765000  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:17.785495  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:17.785523  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:17.840694  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:17.840727  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:17.840740  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:17.885379  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:17.885409  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:17.912284  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:17.912311  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:17.938789  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:17.938815  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:18.000539  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:18.000571  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:18.030186  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:18.030216  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:18.059460  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:18.059500  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:20.598727  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:20.610311  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:20.630114  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:20.630198  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:20.653572  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:20.653657  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:20.678353  479667 logs.go:282] 0 containers: []
	W1222 23:48:20.678381  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:20.678438  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:20.701407  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:20.701494  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:20.720665  479667 logs.go:282] 0 containers: []
	W1222 23:48:20.720689  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:20.720749  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:20.740393  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:20.740477  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:20.762768  479667 logs.go:282] 0 containers: []
	W1222 23:48:20.762793  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:20.762849  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:20.785758  479667 logs.go:282] 0 containers: []
	W1222 23:48:20.785784  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:20.785801  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:20.785819  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:20.817681  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:20.817710  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:20.853615  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:20.853642  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:20.914706  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:20.914735  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:20.936157  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:20.936192  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:20.979744  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:20.979772  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:21.013547  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:21.013576  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:21.037708  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:21.037736  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:21.106670  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:21.106697  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:21.106714  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:23.643754  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:23.661360  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:23.690810  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:23.690932  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:23.716486  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:23.716559  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:23.743522  479667 logs.go:282] 0 containers: []
	W1222 23:48:23.743665  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:23.743756  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:23.768940  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:23.769018  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:23.793886  479667 logs.go:282] 0 containers: []
	W1222 23:48:23.793917  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:23.793976  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:23.817611  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:23.817686  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:23.836310  479667 logs.go:282] 0 containers: []
	W1222 23:48:23.836338  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:23.836404  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:23.869887  479667 logs.go:282] 0 containers: []
	W1222 23:48:23.869983  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:23.870016  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:23.870032  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:23.906865  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:23.906902  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:23.940795  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:23.940827  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:23.982992  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:23.983024  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:24.101549  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:24.101603  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:24.124240  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:24.124280  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:24.203265  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:24.203291  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:24.203308  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:24.237299  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:24.237331  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:24.285501  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:24.285538  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:26.827689  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:26.840195  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:26.862893  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:26.862972  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:26.890315  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:26.890428  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:26.914163  479667 logs.go:282] 0 containers: []
	W1222 23:48:26.914195  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:26.914273  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:26.939792  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:26.939870  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:26.962303  479667 logs.go:282] 0 containers: []
	W1222 23:48:26.962327  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:26.962377  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:26.987371  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:26.987447  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:27.011214  479667 logs.go:282] 0 containers: []
	W1222 23:48:27.011245  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:27.011306  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:27.038149  479667 logs.go:282] 0 containers: []
	W1222 23:48:27.038436  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:27.038511  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:27.038533  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:27.076099  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:27.076189  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:27.110925  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:27.110953  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:27.178625  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:27.178667  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:27.200380  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:27.200405  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:27.244675  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:27.244712  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:27.283412  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:27.283446  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:27.313824  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:27.313854  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:27.351068  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:27.351103  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:27.418376  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:29.918582  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:29.929767  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:29.954960  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:29.955032  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:29.984100  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:29.984173  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:30.011354  479667 logs.go:282] 0 containers: []
	W1222 23:48:30.011403  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:30.011459  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:30.039097  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:30.039189  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:30.067646  479667 logs.go:282] 0 containers: []
	W1222 23:48:30.067669  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:30.067720  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:30.089339  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:30.089407  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:30.108878  479667 logs.go:282] 0 containers: []
	W1222 23:48:30.108914  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:30.108981  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:30.134072  479667 logs.go:282] 0 containers: []
	W1222 23:48:30.134102  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:30.134124  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:30.134139  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:30.166199  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:30.166239  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:30.223567  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:30.223604  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:30.223622  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:30.255623  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:30.255659  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:30.287821  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:30.287856  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:30.310327  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:30.310354  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:30.353444  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:30.353484  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:30.412875  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:30.412916  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:30.431369  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:30.431396  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:32.966700  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:32.977791  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:32.999857  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:32.999934  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:33.025461  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:33.025544  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:33.049099  479667 logs.go:282] 0 containers: []
	W1222 23:48:33.049119  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:33.049162  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:33.068846  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:33.068928  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:33.087744  479667 logs.go:282] 0 containers: []
	W1222 23:48:33.087770  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:33.087822  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:33.106447  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:33.106530  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:33.124845  479667 logs.go:282] 0 containers: []
	W1222 23:48:33.124869  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:33.124911  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:33.146392  479667 logs.go:282] 0 containers: []
	W1222 23:48:33.146419  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:33.146447  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:33.146470  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:33.176225  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:33.176253  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:33.224880  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:33.224908  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:33.244038  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:33.244064  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:33.272009  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:33.272034  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:33.296444  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:33.296471  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:33.326758  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:33.326785  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:33.381540  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:33.381561  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:33.381575  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:33.415916  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:33.415941  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:35.947712  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:35.958567  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:35.979185  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:35.979252  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:36.002063  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:36.002134  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:36.024202  479667 logs.go:282] 0 containers: []
	W1222 23:48:36.024224  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:36.024275  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:36.050630  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:36.050703  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:36.069614  479667 logs.go:282] 0 containers: []
	W1222 23:48:36.069635  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:36.069686  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:36.090073  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:36.090156  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:36.111502  479667 logs.go:282] 0 containers: []
	W1222 23:48:36.111535  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:36.111589  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:36.131991  479667 logs.go:282] 0 containers: []
	W1222 23:48:36.132016  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:36.132037  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:36.132054  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:36.156617  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:36.156658  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:36.198941  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:36.198972  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:36.228140  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:36.228172  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:36.262229  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:36.262269  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:36.317468  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:36.317503  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:36.383744  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:36.383765  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:36.383782  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:36.413864  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:36.413893  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:36.440504  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:36.440533  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:38.962739  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:38.973856  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:38.996275  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:38.996358  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:39.023046  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:39.023132  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:39.046487  479667 logs.go:282] 0 containers: []
	W1222 23:48:39.046512  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:39.046556  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:39.068779  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:39.068868  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:39.089456  479667 logs.go:282] 0 containers: []
	W1222 23:48:39.089484  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:39.089554  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:39.109125  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:39.109205  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:39.127438  479667 logs.go:282] 0 containers: []
	W1222 23:48:39.127463  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:39.127516  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:39.147308  479667 logs.go:282] 0 containers: []
	W1222 23:48:39.147332  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:39.147350  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:39.147362  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:39.180486  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:39.180514  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:39.209029  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:39.209059  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:39.241688  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:39.241722  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:39.279327  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:39.279360  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:39.329605  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:39.329633  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:39.349820  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:39.349852  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:39.407756  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:39.407781  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:39.407797  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:39.436273  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:39.436301  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:41.959548  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:41.970762  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:41.993196  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:41.993260  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:42.018360  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:42.018436  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:42.041609  479667 logs.go:282] 0 containers: []
	W1222 23:48:42.041644  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:42.041695  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:42.065571  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:42.065663  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:42.086121  479667 logs.go:282] 0 containers: []
	W1222 23:48:42.086142  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:42.086187  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:42.106861  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:42.106925  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:42.127256  479667 logs.go:282] 0 containers: []
	W1222 23:48:42.127281  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:42.127337  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:42.148139  479667 logs.go:282] 0 containers: []
	W1222 23:48:42.148162  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:42.148180  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:42.148194  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:42.179726  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:42.179755  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:42.226798  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:42.226826  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:42.244889  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:42.244919  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:42.306880  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:42.306902  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:42.306919  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:42.337435  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:42.337465  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:42.365876  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:42.365903  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:42.388021  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:42.388046  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:42.425934  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:42.425960  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:44.955699  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:44.968579  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:44.991823  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:44.991917  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:45.021932  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:45.022022  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:45.052270  479667 logs.go:282] 0 containers: []
	W1222 23:48:45.052298  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:45.052356  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:45.074756  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:45.074830  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:45.093243  479667 logs.go:282] 0 containers: []
	W1222 23:48:45.093264  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:45.093308  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:45.113386  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:45.113464  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:45.136976  479667 logs.go:282] 0 containers: []
	W1222 23:48:45.137000  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:45.137054  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:45.160084  479667 logs.go:282] 0 containers: []
	W1222 23:48:45.160113  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:45.160133  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:45.160154  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:45.179655  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:45.179692  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:45.239030  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:45.239059  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:45.239078  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:45.274300  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:45.274342  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:45.302809  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:45.302839  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:45.361941  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:45.361969  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:45.398139  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:45.398167  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:45.431212  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:45.431245  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:45.462304  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:45.462340  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:47.999806  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:48.014320  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:48.042622  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:48.042715  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:48.070394  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:48.070473  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:48.092087  479667 logs.go:282] 0 containers: []
	W1222 23:48:48.092109  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:48.092174  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:48.121026  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:48.121112  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:48.143975  479667 logs.go:282] 0 containers: []
	W1222 23:48:48.143996  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:48.144043  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:48.165047  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:48.165125  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:48.183759  479667 logs.go:282] 0 containers: []
	W1222 23:48:48.183780  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:48.183825  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:48.203039  479667 logs.go:282] 0 containers: []
	W1222 23:48:48.203065  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:48.203082  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:48.203097  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:48.272682  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:48.272704  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:48.272722  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:48.302560  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:48.302588  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:48.331202  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:48.331237  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:48.371162  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:48.371199  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:48.405510  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:48.405548  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:48.448319  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:48.448366  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:48.488536  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:48.488569  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:48.558424  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:48.558457  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:51.078730  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:51.090914  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1222 23:48:51.112374  479667 logs.go:282] 1 containers: [4db369cb7727]
	I1222 23:48:51.112458  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1222 23:48:51.134522  479667 logs.go:282] 1 containers: [eec44bb2b99c]
	I1222 23:48:51.134619  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1222 23:48:51.154524  479667 logs.go:282] 0 containers: []
	W1222 23:48:51.154549  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:48:51.154636  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1222 23:48:51.173233  479667 logs.go:282] 1 containers: [5fdee4dffac9]
	I1222 23:48:51.173312  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1222 23:48:51.193722  479667 logs.go:282] 0 containers: []
	W1222 23:48:51.193754  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:48:51.193808  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1222 23:48:51.212741  479667 logs.go:282] 1 containers: [d0cd4466a37f]
	I1222 23:48:51.212805  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1222 23:48:51.233296  479667 logs.go:282] 0 containers: []
	W1222 23:48:51.233321  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:48:51.233374  479667 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1222 23:48:51.252784  479667 logs.go:282] 0 containers: []
	W1222 23:48:51.252813  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:48:51.252833  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:48:51.252847  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:48:51.313494  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:48:51.313518  479667 logs.go:123] Gathering logs for kube-scheduler [5fdee4dffac9] ...
	I1222 23:48:51.313533  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fdee4dffac9"
	I1222 23:48:51.344260  479667 logs.go:123] Gathering logs for kube-controller-manager [d0cd4466a37f] ...
	I1222 23:48:51.344294  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0cd4466a37f"
	I1222 23:48:51.371993  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:48:51.372028  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:48:51.399738  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:48:51.399767  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:48:51.435179  479667 logs.go:123] Gathering logs for kube-apiserver [4db369cb7727] ...
	I1222 23:48:51.435212  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db369cb7727"
	I1222 23:48:51.467840  479667 logs.go:123] Gathering logs for etcd [eec44bb2b99c] ...
	I1222 23:48:51.467870  479667 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eec44bb2b99c"
	I1222 23:48:51.496304  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:48:51.496336  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:48:51.549109  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:48:51.549140  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:48:54.072722  479667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:48:54.086539  479667 kubeadm.go:602] duration metric: took 4m4.163951088s to restartPrimaryControlPlane
	W1222 23:48:54.086647  479667 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1222 23:48:54.086718  479667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
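The duration metric above places the start of the restart attempt at roughly 23:44:50 (4m4.16s before 23:48:54). With that budget exhausted and the apiserver still refusing connections, minikube abandons the restart path and wipes the control plane with kubeadm reset, pointing --cri-socket at the cri-dockerd shim because the container runtime here is Docker, and passing --force to skip the confirmation prompt. Roughly, as a standalone sketch (assumed structure, not minikube's source):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same invocation as the log line above, run through bash so the
        // PATH override applies to the pinned kubeadm binary.
        reset := `env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" ` +
            `kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`
        if err := exec.Command("sudo", "/bin/bash", "-c", reset).Run(); err != nil {
            log.Fatalf("kubeadm reset failed: %v", err)
        }
        log.Println("control plane wiped; ready for kubeadm init")
    }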
	I1222 23:48:54.580165  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:48:54.594472  479667 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:48:54.603234  479667 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:48:54.603282  479667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:48:54.613090  479667 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:48:54.613109  479667 kubeadm.go:158] found existing configuration files:
	
	I1222 23:48:54.613157  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:48:54.626050  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:48:54.626645  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:48:54.638156  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:48:54.648068  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:48:54.648139  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:48:54.656325  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:48:54.663741  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:48:54.663792  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:48:54.671019  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:48:54.678767  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:48:54.678815  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:48:54.686734  479667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:48:54.732374  479667 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:48:54.732452  479667 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:48:54.816274  479667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:48:54.816376  479667 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:48:54.816421  479667 kubeadm.go:319] OS: Linux
	I1222 23:48:54.816480  479667 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:48:54.816542  479667 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:48:54.816615  479667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:48:54.816682  479667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:48:54.816745  479667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:48:54.816806  479667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:48:54.816866  479667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:48:54.816927  479667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:48:54.816976  479667 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:48:54.885100  479667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:48:54.885265  479667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:48:54.885408  479667 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:48:54.897110  479667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:48:54.899806  479667 out.go:252]   - Generating certificates and keys ...
	I1222 23:48:54.899898  479667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:48:54.899998  479667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:48:54.900094  479667 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:48:54.900185  479667 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:48:54.900255  479667 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:48:54.900303  479667 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:48:54.900387  479667 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:48:54.900489  479667 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:48:54.900610  479667 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:48:54.900721  479667 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:48:54.900773  479667 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:48:54.900855  479667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:48:55.068259  479667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:48:55.308769  479667 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:48:55.382277  479667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:48:55.524463  479667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:48:55.620975  479667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:48:55.621213  479667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:48:55.625686  479667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:48:55.627055  479667 out.go:252]   - Booting up control plane ...
	I1222 23:48:55.627177  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:48:55.628724  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:48:55.630330  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:48:55.653702  479667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:48:55.653830  479667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:48:55.662070  479667 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:48:55.662331  479667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:48:55.662399  479667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:48:55.798366  479667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:48:55.798508  479667 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:52:55.799204  479667 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00092633s
	I1222 23:52:55.799249  479667 kubeadm.go:319] 
	I1222 23:52:55.799361  479667 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:52:55.799442  479667 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:52:55.799575  479667 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:52:55.799614  479667 kubeadm.go:319] 
	I1222 23:52:55.799741  479667 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:52:55.799797  479667 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:52:55.799840  479667 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:52:55.799850  479667 kubeadm.go:319] 
	I1222 23:52:55.802576  479667 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:52:55.803088  479667 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:52:55.803203  479667 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:52:55.803450  479667 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:52:55.803466  479667 kubeadm.go:319] 
	I1222 23:52:55.803535  479667 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 23:52:55.803708  479667 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00092633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
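Each retry above fails at the same point: kubeadm's [kubelet-check] never gets a healthy answer from http://127.0.0.1:10248/healthz within 4m0s. A minimal triage sketch, assuming shell access to the node via `minikube ssh` (the profile name is a placeholder, not taken from this log):

	# Run the two checks the kubeadm output itself suggests; <profile> is hypothetical.
	minikube ssh -p <profile> -- sudo systemctl status kubelet --no-pager
	minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# The exact probe kubeadm polls during [kubelet-check]:
	minikube ssh -p <profile> -- curl -sSL http://127.0.0.1:10248/healthz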
	
	I1222 23:52:55.803782  479667 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:52:56.222389  479667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:52:56.235221  479667 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:52:56.235272  479667 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:52:56.243164  479667 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:52:56.243179  479667 kubeadm.go:158] found existing configuration files:
	
	I1222 23:52:56.243220  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:52:56.250726  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:52:56.250783  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:52:56.257796  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:52:56.265261  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:52:56.265320  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:52:56.272730  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:52:56.279960  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:52:56.280000  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:52:56.286875  479667 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:52:56.294176  479667 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:52:56.294229  479667 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:52:56.301015  479667 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:52:56.411136  479667 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:52:56.411737  479667 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:52:56.478456  479667 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:56:57.270461  479667 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:56:57.270514  479667 kubeadm.go:319] 
	I1222 23:56:57.270645  479667 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:56:57.273257  479667 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:56:57.273339  479667 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:56:57.273488  479667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:56:57.273575  479667 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:56:57.273634  479667 kubeadm.go:319] OS: Linux
	I1222 23:56:57.273696  479667 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:56:57.273758  479667 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:56:57.273818  479667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:56:57.273878  479667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:56:57.273944  479667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:56:57.274185  479667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:56:57.274254  479667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:56:57.274353  479667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:56:57.274433  479667 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:56:57.274534  479667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:56:57.274730  479667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:56:57.274849  479667 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:56:57.274921  479667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:56:57.276636  479667 out.go:252]   - Generating certificates and keys ...
	I1222 23:56:57.276741  479667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:56:57.276823  479667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:56:57.276939  479667 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:56:57.277025  479667 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:56:57.277126  479667 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:56:57.277196  479667 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:56:57.277276  479667 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:56:57.277407  479667 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:56:57.277534  479667 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:56:57.277666  479667 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:56:57.277742  479667 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:56:57.277821  479667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:56:57.277889  479667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:56:57.277966  479667 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:56:57.278037  479667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:56:57.278120  479667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:56:57.278199  479667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:56:57.278318  479667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:56:57.278434  479667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:56:57.279970  479667 out.go:252]   - Booting up control plane ...
	I1222 23:56:57.280089  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:56:57.280218  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:56:57.280321  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:56:57.280523  479667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:56:57.280687  479667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:56:57.280859  479667 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:56:57.281014  479667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:56:57.281084  479667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:56:57.281295  479667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:56:57.281469  479667 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:56:57.281571  479667 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001041622s
	I1222 23:56:57.281635  479667 kubeadm.go:319] 
	I1222 23:56:57.281714  479667 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:56:57.281762  479667 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:56:57.281902  479667 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:56:57.281914  479667 kubeadm.go:319] 
	I1222 23:56:57.282054  479667 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:56:57.282099  479667 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:56:57.282145  479667 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:56:57.282247  479667 kubeadm.go:403] duration metric: took 12m7.390423119s to StartCluster
	I1222 23:56:57.282304  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:56:57.282379  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:56:57.282503  479667 kubeadm.go:319] 
	I1222 23:56:57.323087  479667 cri.go:96] found id: ""
	I1222 23:56:57.323114  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.323127  479667 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:56:57.323136  479667 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:56:57.323199  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:56:57.348508  479667 cri.go:96] found id: ""
	I1222 23:56:57.348532  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.348541  479667 logs.go:284] No container was found matching "etcd"
	I1222 23:56:57.348552  479667 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:56:57.348617  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:56:57.377455  479667 cri.go:96] found id: ""
	I1222 23:56:57.377484  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.377495  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:56:57.377503  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:56:57.377568  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:56:57.406330  479667 cri.go:96] found id: ""
	I1222 23:56:57.406353  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.406361  479667 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:56:57.406371  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:56:57.406431  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:56:57.433916  479667 cri.go:96] found id: ""
	I1222 23:56:57.433939  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.433949  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:56:57.433956  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:56:57.433999  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:56:57.463537  479667 cri.go:96] found id: ""
	I1222 23:56:57.463565  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.463576  479667 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:56:57.463585  479667 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:56:57.463662  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:56:57.492830  479667 cri.go:96] found id: ""
	I1222 23:56:57.492865  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.492877  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:56:57.492885  479667 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 23:56:57.492947  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 23:56:57.533530  479667 cri.go:96] found id: ""
	I1222 23:56:57.533637  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.533660  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:56:57.533677  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:56:57.533694  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:56:57.593557  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:56:57.593586  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:56:57.613005  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:56:57.613033  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:56:57.671159  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:56:57.671186  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:56:57.671198  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:56:57.692998  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:56:57.693027  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 23:56:57.723470  479667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001041622s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
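Of the three stderr warnings repeated above, the kernel-config one is benign noise: kubeadm's system verifier tries to read the kernel build configuration and, failing that, to `modprobe configs`, and the 6.8.0-1045-gcp kernel ships neither. A hedged way to confirm this on the node (these are the standard kernel-config locations, not commands this report ran):

	# The verifier looks for the kernel config in the usual places...
	ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null
	# ...and falls back to loading the 'configs' module, which this kernel lacks:
	sudo modprobe configs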
	W1222 23:56:57.723521  479667 out.go:285] * 
	W1222 23:56:57.723580  479667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001041622s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:56:57.723611  479667 out.go:285] * 
	W1222 23:56:57.723920  479667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
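Of the warnings above, the cgroups one is the strongest lead: the CGROUPS_* entries in the preflight output are cgroup v1 controller names, and the warning's own text says kubelet v1.35+ refuses a v1 hierarchy unless 'FailCgroupV1' is set to 'false', which is consistent with the "required cgroups disabled" hypothesis kubeadm prints. A sketch for confirming the hierarchy from inside the node, with the opt-out shown as comments (the stat probe is a common idiom, not something this report runs):

	# 'cgroup2fs' means cgroup v2; 'tmpfs' means the legacy v1 hierarchy.
	stat -fc %T /sys/fs/cgroup
	# The opt-out named by the warning, as a KubeletConfiguration field:
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false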
	I1222 23:56:57.727055  479667 out.go:203] 
	W1222 23:56:57.728300  479667 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001041622s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:56:57.728358  479667 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:56:57.728392  479667 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:56:57.729662  479667 out.go:203] 

** /stderr **
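For reference, the kubelet checks suggested in the stderr above can be replayed against this node with minikube's ssh subcommand (a sketch only: the diagnostic flags mirror the commands visible in the Audit log further below, and the final retry uses the override proposed by the Suggestion line, which is a hint rather than a confirmed fix):

	# kubelet state and recent journal inside the node (same commands the harness audits)
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-767823 sudo systemctl status kubelet --all --full --no-pager
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-767823 sudo journalctl -xeu kubelet --all --full --no-pager
	# retry the start with the cgroup-driver override the Suggestion line proposes
	out/minikube-linux-amd64 start -p kubernetes-upgrade-767823 --kubernetes-version=v1.35.0-rc.1 --extra-config=kubelet.cgroup-driver=systemd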
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-767823 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-767823 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-767823 version --output=json: exit status 1 (55.782902ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "35",
	    "gitVersion": "v1.35.0",
	    "gitCommit": "66452049f3d692768c39c797b21b793dce80314e",
	    "gitTreeState": "clean",
	    "buildDate": "2025-12-17T12:41:05Z",
	    "goVersion": "go1.25.5",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v5.7.1"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
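The connection refused on 192.168.76.2:8443 can also be cross-checked from the host, since the apiserver port is published through Docker (the mapping appears in the docker inspect output below); a sketch, assuming the same container name:

	# host port that Docker publishes for the apiserver port 8443 inside the node
	docker port kubernetes-upgrade-767823 8443
	# confirm the node container itself is still running
	docker container inspect kubernetes-upgrade-767823 --format={{.State.Status}}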
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-22 23:56:58.08124747 +0000 UTC m=+5088.619783049
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-767823
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-767823:

-- stdout --
	[
	    {
	        "Id": "81af7bcf0334f93b528c1839890e2a621fdd50c315661e05e998139ba571e702",
	        "Created": "2025-12-22T23:44:04.400312019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479922,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:44:37.124546293Z",
	            "FinishedAt": "2025-12-22T23:44:36.384874254Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/81af7bcf0334f93b528c1839890e2a621fdd50c315661e05e998139ba571e702/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/81af7bcf0334f93b528c1839890e2a621fdd50c315661e05e998139ba571e702/hostname",
	        "HostsPath": "/var/lib/docker/containers/81af7bcf0334f93b528c1839890e2a621fdd50c315661e05e998139ba571e702/hosts",
	        "LogPath": "/var/lib/docker/containers/81af7bcf0334f93b528c1839890e2a621fdd50c315661e05e998139ba571e702/81af7bcf0334f93b528c1839890e2a621fdd50c315661e05e998139ba571e702-json.log",
	        "Name": "/kubernetes-upgrade-767823",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-767823:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-767823",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "81af7bcf0334f93b528c1839890e2a621fdd50c315661e05e998139ba571e702",
	                "LowerDir": "/var/lib/docker/overlay2/3b0a1f557ced093bc9ae469c2caaf7acbfde55f8753e20111c76060554439c79-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b0a1f557ced093bc9ae469c2caaf7acbfde55f8753e20111c76060554439c79/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b0a1f557ced093bc9ae469c2caaf7acbfde55f8753e20111c76060554439c79/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b0a1f557ced093bc9ae469c2caaf7acbfde55f8753e20111c76060554439c79/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-767823",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-767823/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-767823",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-767823",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-767823",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1fdfc916537519972c10b6410deb9122a420f1b382a993926f8516eb6ee638a0",
	            "SandboxKey": "/var/run/docker/netns/1fdfc9165375",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "kubernetes-upgrade-767823": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5f6692e5184da2d1b317eaedcf5e8bc2c7232f9852907853dabf3c2e8eedaa37",
	                    "EndpointID": "69f371835ebb41a329b829db45a1f8befbd4ff9f453d50883a71ccf6608eb148",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "72:4a:18:b9:da:51",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-767823",
	                        "81af7bcf0334"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
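Individual fields of an inspect document like the one above can be extracted with a Go template instead of dumping the whole JSON; both template forms in this sketch appear verbatim in the "Last Start" log later in this report:

	# container state only
	docker container inspect kubernetes-upgrade-767823 --format={{.State.Status}}
	# host port bound to the node's SSH port 22
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-767823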
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-767823 -n kubernetes-upgrade-767823
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-767823 -n kubernetes-upgrade-767823: exit status 2 (327.756338ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-767823 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-003676 sudo iptables-save                                            │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo iptables -t nat -L -n -v                                 │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /run/flannel/subnet.env                              │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /etc/kube-flannel/cni-conf.json                      │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │                     │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl status kubelet --all --full --no-pager         │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl cat kubelet --no-pager                         │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo journalctl -xeu kubelet --all --full --no-pager          │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /etc/kubernetes/kubelet.conf                         │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl cat docker --no-pager                          │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /etc/docker/daemon.json                              │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo docker system info                                       │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cri-dockerd --version                                    │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl cat containerd --no-pager                      │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo cat /etc/containerd/config.toml                          │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo containerd config dump                                   │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │ 22 Dec 25 23:56 UTC │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │                     │
	│ ssh     │ -p custom-flannel-003676 sudo systemctl cat crio --no-pager                            │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:56:06
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:56:06.787812  622784 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:56:06.787957  622784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:56:06.787968  622784 out.go:374] Setting ErrFile to fd 2...
	I1222 23:56:06.787975  622784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:56:06.788198  622784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:56:06.788690  622784 out.go:368] Setting JSON to false
	I1222 23:56:06.789869  622784 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13107,"bootTime":1766434660,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:56:06.789927  622784 start.go:143] virtualization: kvm guest
	I1222 23:56:06.791916  622784 out.go:179] * [no-preload-063943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:56:06.793066  622784 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:56:06.793070  622784 notify.go:221] Checking for updates...
	I1222 23:56:06.795123  622784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:56:06.796547  622784 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:56:06.797830  622784 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:56:06.798963  622784 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:56:06.799929  622784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:56:06.801301  622784 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:56:06.801991  622784 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:56:06.829291  622784 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:56:06.829446  622784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:56:06.883810  622784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:56:06.874445764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:56:06.883920  622784 docker.go:319] overlay module found
	I1222 23:56:06.889706  622784 out.go:179] * Using the docker driver based on existing profile
	I1222 23:56:06.890811  622784 start.go:309] selected driver: docker
	I1222 23:56:06.890825  622784 start.go:928] validating driver "docker" against &{Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:56:06.890942  622784 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:56:06.891846  622784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:56:06.949722  622784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:56:06.939639835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:56:06.950105  622784 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:56:06.950134  622784 cni.go:84] Creating CNI manager for ""
	I1222 23:56:06.950197  622784 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:56:06.950233  622784 start.go:353] cluster config:
	{Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:56:06.952215  622784 out.go:179] * Starting "no-preload-063943" primary control-plane node in "no-preload-063943" cluster
	I1222 23:56:06.953238  622784 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:56:06.954402  622784 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:56:06.955517  622784 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:56:06.955632  622784 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:56:06.955700  622784 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/config.json ...
	I1222 23:56:06.955876  622784 cache.go:107] acquiring lock: {Name:mka2a7cd00c9ee09fcd67b9fe2b1b7736241aafe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955888  622784 cache.go:107] acquiring lock: {Name:mk804c5f94e18a50ea710125b603ced35b076f37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955908  622784 cache.go:107] acquiring lock: {Name:mk0f5262807fb5404c75ce06ce5720befe312fb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955959  622784 cache.go:107] acquiring lock: {Name:mk6f1235a31bde2512aad5a6083026ce14993945 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956006  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 23:56:06.955994  622784 cache.go:107] acquiring lock: {Name:mk62dd6bd5d525d245831d36e2e60bb4a4c91eaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955983  622784 cache.go:107] acquiring lock: {Name:mk7ec73f502e042fe14942dd4168f5178dfa9f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956016  622784 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 155.603µs
	I1222 23:56:06.956034  622784 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956023  622784 cache.go:107] acquiring lock: {Name:mk06ea69d634e565762c96598011d1945c901ed0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956055  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 23:56:06.956079  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 23:56:06.956082  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1222 23:56:06.956091  622784 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 234.014µs
	I1222 23:56:06.956094  622784 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 106.055µs
	I1222 23:56:06.956010  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 23:56:06.956101  622784 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 23:56:06.956078  622784 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 120.906µs
	I1222 23:56:06.956110  622784 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 23:56:06.956103  622784 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 23:56:06.955994  622784 cache.go:107] acquiring lock: {Name:mk02371428913253d2a19c8c9a792727a5cd8caa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956125  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 23:56:06.956131  622784 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 204.094µs
	I1222 23:56:06.956141  622784 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956109  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 23:56:06.956149  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 23:56:06.956155  622784 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 133.189µs
	I1222 23:56:06.956156  622784 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 165.223µs
	I1222 23:56:06.956166  622784 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 23:56:06.956166  622784 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956157  622784 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 213.023µs
	I1222 23:56:06.956199  622784 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956215  622784 cache.go:87] Successfully saved all images to host disk.
	I1222 23:56:06.977510  622784 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:56:06.977529  622784 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:56:06.977556  622784 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:56:06.977588  622784 start.go:360] acquireMachinesLock for no-preload-063943: {Name:mke00101a1e3840a843a95b7b058ed2d434978f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.977668  622784 start.go:364] duration metric: took 39.489µs to acquireMachinesLock for "no-preload-063943"
	I1222 23:56:06.977686  622784 start.go:96] Skipping create...Using existing machine configuration
	I1222 23:56:06.977693  622784 fix.go:54] fixHost starting: 
	I1222 23:56:06.977891  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:06.996235  622784 fix.go:112] recreateIfNeeded on no-preload-063943: state=Stopped err=<nil>
	W1222 23:56:06.996269  622784 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 23:56:08.352718  617786 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.118417794s
	I1222 23:56:09.027934  617786 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.793674753s
	I1222 23:56:10.736216  617786 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501998348s
	I1222 23:56:10.751710  617786 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1222 23:56:10.762050  617786 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1222 23:56:10.771977  617786 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1222 23:56:10.772272  617786 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-003676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1222 23:56:10.779868  617786 kubeadm.go:319] [bootstrap-token] Using token: 6d3pfs.kbwr5oybxrz395lr
	I1222 23:56:10.781020  617786 out.go:252]   - Configuring RBAC rules ...
	I1222 23:56:10.781157  617786 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1222 23:56:10.784217  617786 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1222 23:56:10.788983  617786 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1222 23:56:10.791363  617786 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1222 23:56:10.793731  617786 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1222 23:56:10.796612  617786 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1222 23:56:11.142203  617786 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1222 23:56:11.618834  617786 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1222 23:56:06.998227  622784 out.go:252] * Restarting existing docker container for "no-preload-063943" ...
	I1222 23:56:06.998315  622784 cli_runner.go:164] Run: docker start no-preload-063943
	I1222 23:56:07.267708  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:07.293171  622784 kic.go:430] container "no-preload-063943" state is running.
	I1222 23:56:07.293722  622784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:56:07.316033  622784 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/config.json ...
	I1222 23:56:07.316314  622784 machine.go:94] provisionDockerMachine start ...
	I1222 23:56:07.316411  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:07.341250  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:07.341631  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:07.341656  622784 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:56:07.342836  622784 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36610->127.0.0.1:33138: read: connection reset by peer
	I1222 23:56:10.487762  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-063943
	
	I1222 23:56:10.487795  622784 ubuntu.go:182] provisioning hostname "no-preload-063943"
	I1222 23:56:10.487850  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:10.505616  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:10.505838  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:10.505851  622784 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-063943 && echo "no-preload-063943" | sudo tee /etc/hostname
	I1222 23:56:10.658160  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-063943
	
	I1222 23:56:10.658231  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:10.675956  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:10.676168  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:10.676185  622784 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-063943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-063943/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-063943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:56:10.821341  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:56:10.821371  622784 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:56:10.821395  622784 ubuntu.go:190] setting up certificates
	I1222 23:56:10.821408  622784 provision.go:84] configureAuth start
	I1222 23:56:10.821472  622784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:56:10.838531  622784 provision.go:143] copyHostCerts
	I1222 23:56:10.838584  622784 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:56:10.838614  622784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:56:10.838691  622784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:56:10.838798  622784 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:56:10.838806  622784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:56:10.838834  622784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:56:10.838905  622784 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:56:10.838913  622784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:56:10.838938  622784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:56:10.839052  622784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.no-preload-063943 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-063943]
	I1222 23:56:10.925520  622784 provision.go:177] copyRemoteCerts
	I1222 23:56:10.925580  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:56:10.925637  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:10.946266  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.053523  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:56:11.071312  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 23:56:11.089514  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:56:11.106427  622784 provision.go:87] duration metric: took 285.002519ms to configureAuth
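	configureAuth above generated a fresh server certificate (org=jenkins.no-preload-063943, SANs 127.0.0.1, 192.168.103.2, localhost, minikube, no-preload-063943) and copied it to /etc/docker on the machine. A quick manual check that the SANs made it into the cert, using the paths from this log (not part of the test run):
	    # print the Subject Alternative Name extension of the provisioned server cert
	    openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'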
	I1222 23:56:11.106461  622784 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:56:11.106672  622784 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:56:11.106743  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.124909  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:11.125186  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:11.125199  622784 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:56:11.278611  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:56:11.278644  622784 ubuntu.go:71] root file system type: overlay
	I1222 23:56:11.278779  622784 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:56:11.278840  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.300929  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:11.301153  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:11.301217  622784 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:56:11.463142  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:56:11.463226  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.481470  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:11.481715  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:11.481734  622784 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:56:11.639310  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:56:11.639342  622784 machine.go:97] duration metric: took 4.323005964s to provisionDockerMachine
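	The empty ExecStart= in the unit just written is deliberate: systemd appends ExecStart= lines, and a second command on a non-oneshot service is rejected, so the blank assignment clears any inherited command before the dockerd invocation. A manual sanity check on the installed unit (not part of the test run):
	    # expect 2 lines: the blank ExecStart= reset, then the dockerd command
	    systemctl cat docker.service | grep -c '^ExecStart='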
	I1222 23:56:11.639356  622784 start.go:293] postStartSetup for "no-preload-063943" (driver="docker")
	I1222 23:56:11.639373  622784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:56:11.639462  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:56:11.639518  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.657756  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.760924  622784 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:56:11.765207  622784 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:56:11.765242  622784 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:56:11.765256  622784 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:56:11.765321  622784 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:56:11.765437  622784 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:56:11.765558  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:56:11.774504  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:56:12.142316  617786 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1222 23:56:12.143506  617786 kubeadm.go:319] 
	I1222 23:56:12.143617  617786 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1222 23:56:12.143629  617786 kubeadm.go:319] 
	I1222 23:56:12.143730  617786 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1222 23:56:12.143742  617786 kubeadm.go:319] 
	I1222 23:56:12.143799  617786 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1222 23:56:12.143896  617786 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1222 23:56:12.143972  617786 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1222 23:56:12.143982  617786 kubeadm.go:319] 
	I1222 23:56:12.144058  617786 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1222 23:56:12.144067  617786 kubeadm.go:319] 
	I1222 23:56:12.144136  617786 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1222 23:56:12.144145  617786 kubeadm.go:319] 
	I1222 23:56:12.144217  617786 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1222 23:56:12.144319  617786 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1222 23:56:12.144412  617786 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1222 23:56:12.144428  617786 kubeadm.go:319] 
	I1222 23:56:12.144555  617786 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1222 23:56:12.144711  617786 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1222 23:56:12.144727  617786 kubeadm.go:319] 
	I1222 23:56:12.144869  617786 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6d3pfs.kbwr5oybxrz395lr \
	I1222 23:56:12.145017  617786 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5443640003ad4b7907b37ec274fbd08911a8cee85cf5083e20215ea2b9d82bbc \
	I1222 23:56:12.145051  617786 kubeadm.go:319] 	--control-plane 
	I1222 23:56:12.145059  617786 kubeadm.go:319] 
	I1222 23:56:12.145166  617786 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1222 23:56:12.145176  617786 kubeadm.go:319] 
	I1222 23:56:12.145305  617786 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6d3pfs.kbwr5oybxrz395lr \
	I1222 23:56:12.145411  617786 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5443640003ad4b7907b37ec274fbd08911a8cee85cf5083e20215ea2b9d82bbc 
	I1222 23:56:12.148658  617786 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1222 23:56:12.148961  617786 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:56:12.149127  617786 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:56:12.149159  617786 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1222 23:56:12.150742  617786 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
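	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane with the standard kubeadm recipe (illustrative; the CA path assumes the /var/lib/minikube/certs layout minikube configures as certificatesDir):
	    # hash the DER-encoded public key of the cluster CA
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'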
	I1222 23:56:11.796185  622784 start.go:296] duration metric: took 156.80897ms for postStartSetup
	I1222 23:56:11.796390  622784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:56:11.796444  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.814486  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.914168  622784 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:56:11.919676  622784 fix.go:56] duration metric: took 4.941972273s for fixHost
	I1222 23:56:11.919706  622784 start.go:83] releasing machines lock for "no-preload-063943", held for 4.942026606s
	I1222 23:56:11.919783  622784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:56:11.937547  622784 ssh_runner.go:195] Run: cat /version.json
	I1222 23:56:11.937625  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.937644  622784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:56:11.937709  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.958084  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.958371  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:12.112211  622784 ssh_runner.go:195] Run: systemctl --version
	I1222 23:56:12.119070  622784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:56:12.123978  622784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:56:12.124040  622784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:56:12.132370  622784 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 23:56:12.132398  622784 start.go:496] detecting cgroup driver to use...
	I1222 23:56:12.132446  622784 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:56:12.132557  622784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:56:12.147969  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:56:12.156989  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:56:12.166284  622784 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:56:12.166336  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:56:12.175243  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:56:12.184752  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:56:12.193320  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:56:12.203027  622784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:56:12.211365  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:56:12.220477  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:56:12.229435  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 23:56:12.239534  622784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:56:12.248781  622784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:56:12.258007  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:12.349279  622784 ssh_runner.go:195] Run: sudo systemctl restart containerd
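	The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false), normalize the runtime handler to io.containerd.runc.v2, and re-enable unprivileged ports before the restart. A manual check of the rewritten config (not part of the test run):
	    # every runc handler should now carry SystemdCgroup = false
	    grep -n 'SystemdCgroup' /etc/containerd/config.toml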
	I1222 23:56:12.432760  622784 start.go:496] detecting cgroup driver to use...
	I1222 23:56:12.432818  622784 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:56:12.432866  622784 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:56:12.448246  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:56:12.463118  622784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:56:12.487623  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:56:12.506873  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:56:12.529985  622784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:56:12.552327  622784 ssh_runner.go:195] Run: which cri-dockerd
	I1222 23:56:12.558931  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:56:12.568605  622784 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:56:12.581936  622784 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:56:12.677251  622784 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:56:12.764171  622784 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:56:12.764297  622784 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:56:12.778866  622784 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:56:12.791097  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:12.873271  622784 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:56:13.647694  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:56:13.660319  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:56:13.672142  622784 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 23:56:13.686404  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:56:13.699284  622784 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:56:13.787110  622784 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:56:13.865157  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:13.947288  622784 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:56:13.973228  622784 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:56:13.986491  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:14.079336  622784 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:56:14.149938  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:56:14.163244  622784 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:56:14.163323  622784 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 23:56:14.167360  622784 start.go:564] Will wait 60s for crictl version
	I1222 23:56:14.167406  622784 ssh_runner.go:195] Run: which crictl
	I1222 23:56:14.170969  622784 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:56:14.196483  622784 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 23:56:14.196544  622784 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:56:14.225521  622784 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:56:14.252963  622784 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 23:56:14.253065  622784 cli_runner.go:164] Run: docker network inspect no-preload-063943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:56:14.270342  622784 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1222 23:56:14.274551  622784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:56:14.285856  622784 kubeadm.go:884] updating cluster {Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:56:14.285966  622784 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:56:14.285994  622784 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:56:14.306257  622784 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:56:14.306282  622784 cache_images.go:86] Images are preloaded, skipping loading
	I1222 23:56:14.306289  622784 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 docker true true} ...
	I1222 23:56:14.306405  622784 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-063943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
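	The kubelet unit above uses the same reset pattern as the docker unit earlier: a blank ExecStart= followed by the full command carrying --hostname-override, --node-ip, and the bootstrap/kubelet kubeconfigs. After the unit and its 10-kubeadm.conf drop-in are written below, the merged view can be inspected manually with:
	    # shows kubelet.service plus the 10-kubeadm.conf drop-in, merged
	    systemctl cat kubelet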
	I1222 23:56:14.306471  622784 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 23:56:14.356990  622784 cni.go:84] Creating CNI manager for ""
	I1222 23:56:14.357021  622784 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:56:14.357039  622784 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 23:56:14.357066  622784 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-063943 NodeName:no-preload-063943 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:56:14.357229  622784 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-063943"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 23:56:14.357308  622784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 23:56:14.365667  622784 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 23:56:14.365732  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:56:14.373749  622784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1222 23:56:14.387395  622784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 23:56:14.400068  622784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
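	With kubeadm.yaml.new staged on the node, the rendered configuration can be checked offline before the cluster is (re)started. This assumes kubeadm sits alongside kubelet in the binaries directory listed above, which the log does not show directly (manual step, not part of the run):
	    # kubeadm config validate is available in Kubernetes 1.26+
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new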
	I1222 23:56:14.412487  622784 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:56:14.416170  622784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:56:14.426114  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:14.509743  622784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:56:14.540113  622784 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943 for IP: 192.168.103.2
	I1222 23:56:14.540133  622784 certs.go:195] generating shared ca certs ...
	I1222 23:56:14.540148  622784 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:14.540299  622784 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:56:14.540358  622784 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:56:14.540369  622784 certs.go:257] generating profile certs ...
	I1222 23:56:14.540483  622784 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.key
	I1222 23:56:14.540545  622784 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key.9af7d729
	I1222 23:56:14.540617  622784 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key
	I1222 23:56:14.540787  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:56:14.540833  622784 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:56:14.540848  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:56:14.540883  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:56:14.540913  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:56:14.540961  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:56:14.541019  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:56:14.541682  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:56:14.561930  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:56:14.582882  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:56:14.602213  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:56:14.620069  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 23:56:14.637034  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 23:56:14.655477  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:56:14.673395  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 23:56:14.692559  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:56:14.711580  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:56:14.730313  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:56:14.748699  622784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:56:14.763148  622784 ssh_runner.go:195] Run: openssl version
	I1222 23:56:14.770288  622784 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.779278  622784 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:56:14.788538  622784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.793505  622784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.793569  622784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.829348  622784 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 23:56:14.837844  622784 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.845566  622784 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:56:14.853331  622784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.857200  622784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.857245  622784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.892797  622784 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:56:14.900664  622784 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.908787  622784 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:56:14.916129  622784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.919627  622784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.919685  622784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.954538  622784 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
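	The test -L probes above verify c_rehash-style trust links: each CA is reachable in /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 is minikubeCA, per the openssl x509 -hash run above), which is how TLS clients locate it. The same link can be rebuilt by hand (illustrative, not part of the run):
	    # name the trust link after the certificate's subject hash
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"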
	I1222 23:56:14.962583  622784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:56:14.966525  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 23:56:15.002367  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 23:56:15.038250  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 23:56:15.072963  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 23:56:15.109074  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 23:56:15.144870  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
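	Each openssl run above uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now; a non-zero exit marks the cert for regeneration. The equivalent standalone check (illustrative):
	    # exit status 0 = valid for at least another day
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo 'valid for 24h+' || echo 'expires within 24h'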
	I1222 23:56:15.181063  622784 kubeadm.go:401] StartCluster: {Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:56:15.181226  622784 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:56:15.202967  622784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:56:15.212505  622784 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 23:56:15.212526  622784 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 23:56:15.212622  622784 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 23:56:15.221424  622784 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 23:56:15.221982  622784 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:56:15.222165  622784 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-063943" cluster setting kubeconfig missing "no-preload-063943" context setting]
	I1222 23:56:15.222734  622784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:15.224130  622784 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 23:56:15.232074  622784 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1222 23:56:15.232102  622784 kubeadm.go:602] duration metric: took 19.558ms to restartPrimaryControlPlane
	I1222 23:56:15.232112  622784 kubeadm.go:403] duration metric: took 51.063653ms to StartCluster
	I1222 23:56:15.232129  622784 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:15.232190  622784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:56:15.233084  622784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:15.233330  622784 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:56:15.233429  622784 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 23:56:15.233510  622784 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:56:15.233543  622784 addons.go:70] Setting storage-provisioner=true in profile "no-preload-063943"
	I1222 23:56:15.233563  622784 addons.go:70] Setting default-storageclass=true in profile "no-preload-063943"
	I1222 23:56:15.233566  622784 addons.go:239] Setting addon storage-provisioner=true in "no-preload-063943"
	I1222 23:56:15.233566  622784 addons.go:70] Setting dashboard=true in profile "no-preload-063943"
	I1222 23:56:15.233576  622784 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-063943"
	I1222 23:56:15.233585  622784 addons.go:239] Setting addon dashboard=true in "no-preload-063943"
	W1222 23:56:15.233647  622784 addons.go:248] addon dashboard should already be in state true
	I1222 23:56:15.233687  622784 host.go:66] Checking if "no-preload-063943" exists ...
	I1222 23:56:15.233626  622784 host.go:66] Checking if "no-preload-063943" exists ...
	I1222 23:56:15.233903  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.234253  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.234253  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.235759  622784 out.go:179] * Verifying Kubernetes components...
	I1222 23:56:15.236782  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:15.256841  622784 addons.go:239] Setting addon default-storageclass=true in "no-preload-063943"
	I1222 23:56:15.256892  622784 host.go:66] Checking if "no-preload-063943" exists ...
	I1222 23:56:15.257253  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.257984  622784 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 23:56:15.258712  622784 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:56:15.261263  622784 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 23:56:12.155006  617786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1222 23:56:12.155059  617786 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1222 23:56:12.159376  617786 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1222 23:56:12.159405  617786 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1222 23:56:12.177533  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1222 23:56:12.495703  617786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1222 23:56:12.495797  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:12.495826  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-003676 minikube.k8s.io/updated_at=2025_12_22T23_56_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=97c570e2878f345404332f23c78ab0f60732b01b minikube.k8s.io/name=custom-flannel-003676 minikube.k8s.io/primary=true
	I1222 23:56:12.521059  617786 ops.go:34] apiserver oom_adj: -16
	I1222 23:56:12.654933  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:13.155867  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:13.655117  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:14.155809  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:14.655152  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:15.155261  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:15.655794  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:16.155793  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
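	The half-second "kubectl get sa default" loop above is how minikube waits for kube-controller-manager to create the default ServiceAccount before applying RBAC; until that object exists, workloads in the namespace cannot be admitted. The same poll as a standalone sketch (not minikube's code):
	    # block until the default ServiceAccount appears
	    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done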
	I1222 23:56:15.261324  622784 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:15.261338  622784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 23:56:15.261392  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:15.264710  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 23:56:15.264734  622784 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 23:56:15.264793  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:15.289134  622784 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:15.289157  622784 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 23:56:15.289218  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:15.291102  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:15.298284  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:15.308959  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:15.382953  622784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:56:15.435098  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:15.446519  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 23:56:15.446552  622784 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 23:56:15.447890  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:15.461566  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 23:56:15.461602  622784 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 23:56:15.475470  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 23:56:15.475494  622784 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 23:56:15.537632  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 23:56:15.537677  622784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 23:56:15.553483  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 23:56:15.553507  622784 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 23:56:15.566207  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 23:56:15.566236  622784 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 23:56:15.579179  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 23:56:15.579199  622784 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 23:56:15.591978  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 23:56:15.591998  622784 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 23:56:15.604482  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 23:56:15.604506  622784 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 23:56:15.617227  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 23:56:15.996825  622784 node_ready.go:35] waiting up to 6m0s for node "no-preload-063943" to be "Ready" ...
	W1222 23:56:15.996928  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:15.996977  622784 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:15.997065  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:15.997221  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.138509  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:16.186213  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:16.195112  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:16.242963  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.321173  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:16.375911  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.552585  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:16.604411  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:16.607350  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.613504  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:16.667791  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:16.675579  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.655098  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:17.155412  617786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1222 23:56:17.242441  617786 kubeadm.go:1114] duration metric: took 4.746703061s to wait for elevateKubeSystemPrivileges
	I1222 23:56:17.242477  617786 kubeadm.go:403] duration metric: took 17.169053697s to StartCluster
	I1222 23:56:17.242500  617786 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:17.242582  617786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:56:17.243695  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:17.243967  617786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1222 23:56:17.243969  617786 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:56:17.244030  617786 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 23:56:17.244138  617786 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-003676"
	I1222 23:56:17.244171  617786 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-003676"
	I1222 23:56:17.244173  617786 config.go:182] Loaded profile config "custom-flannel-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:56:17.244179  617786 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-003676"
	I1222 23:56:17.244204  617786 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-003676"
	I1222 23:56:17.244208  617786 host.go:66] Checking if "custom-flannel-003676" exists ...
	I1222 23:56:17.244573  617786 cli_runner.go:164] Run: docker container inspect custom-flannel-003676 --format={{.State.Status}}
	I1222 23:56:17.244685  617786 cli_runner.go:164] Run: docker container inspect custom-flannel-003676 --format={{.State.Status}}
	I1222 23:56:17.245193  617786 out.go:179] * Verifying Kubernetes components...
	I1222 23:56:17.246725  617786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:17.268416  617786 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:56:17.268640  617786 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-003676"
	I1222 23:56:17.268682  617786 host.go:66] Checking if "custom-flannel-003676" exists ...
	I1222 23:56:17.269191  617786 cli_runner.go:164] Run: docker container inspect custom-flannel-003676 --format={{.State.Status}}
	I1222 23:56:17.270038  617786 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:17.270060  617786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 23:56:17.270131  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:56:17.294268  617786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa Username:docker}
	I1222 23:56:17.295289  617786 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:17.295309  617786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 23:56:17.295355  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:56:17.314323  617786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa Username:docker}
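The two cli_runner/sshutil pairs above show how minikube reaches a Docker-driver node: it asks Docker which host port is published for the container's 22/tcp, then opens an SSH client against 127.0.0.1 on that port (33133 in this run). A small Go sketch of the same lookup; hostSSHPort is a hypothetical helper built around the exact docker inspect template from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port is mapped to the container's
// sshd on 22/tcp, using the same Go template as the cli_runner lines above.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("custom-flannel-003676") // profile name from this run
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port:", port) // the run above resolved this to 33133
}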
	I1222 23:56:17.433659  617786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1222 23:56:17.534519  617786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:17.534954  617786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:56:17.616080  617786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:18.052417  617786 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
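The bash one-liner at 23:56:17.433 is how the host.minikube.internal record lands in CoreDNS: minikube dumps the coredns ConfigMap, uses sed to splice a hosts block (mapping 192.168.85.1, the gateway of this cluster's network) in front of the forward . /etc/resolv.conf directive, and pipes the result back through kubectl replace. A rough Go equivalent of just the text transformation, assuming a stock Corefile layout; the sed pipeline above is the authoritative version:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block resolving
// host.minikube.internal just before the forward directive.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock) // splice in before the forward directive
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
}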
	I1222 23:56:18.053451  617786 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-003676" to be "Ready" ...
	I1222 23:56:18.440875  617786 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1222 23:56:18.441706  617786 addons.go:530] duration metric: took 1.197677041s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1222 23:56:18.556298  617786 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-003676" context rescaled to 1 replicas
	W1222 23:56:20.057459  617786 node_ready.go:57] node "custom-flannel-003676" has "Ready":"False" status (will retry)
	I1222 23:56:17.119785  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:17.136415  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:17.192096  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:17.204255  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:17.272474  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:17.345915  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:17.799737  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:17.863818  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:17.997440  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:18.263178  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:18.321207  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:18.327549  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:18.389710  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:19.690562  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:19.743520  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:19.794721  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:19.847994  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:19.997559  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:20.181760  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:20.253051  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:20.763117  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:20.823326  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:21.723446  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:21.777119  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:22.556543  617786 node_ready.go:57] node "custom-flannel-003676" has "Ready":"False" status (will retry)
	W1222 23:56:24.557344  617786 node_ready.go:57] node "custom-flannel-003676" has "Ready":"False" status (will retry)
	W1222 23:56:21.997768  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:22.221157  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:22.278106  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:23.764343  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:23.827213  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:23.997943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:24.175261  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:24.241378  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:24.725375  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:24.784907  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:26.497519  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:27.056768  617786 node_ready.go:57] node "custom-flannel-003676" has "Ready":"False" status (will retry)
	I1222 23:56:29.056486  617786 node_ready.go:49] node "custom-flannel-003676" is "Ready"
	I1222 23:56:29.056517  617786 node_ready.go:38] duration metric: took 11.003040424s for node "custom-flannel-003676" to be "Ready" ...
	I1222 23:56:29.056538  617786 api_server.go:52] waiting for apiserver process to appear ...
	I1222 23:56:29.056636  617786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:56:29.070083  617786 api_server.go:72] duration metric: took 11.826085023s to wait for apiserver process to appear ...
	I1222 23:56:29.070106  617786 api_server.go:88] waiting for apiserver healthz status ...
	I1222 23:56:29.070125  617786 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1222 23:56:29.074220  617786 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1222 23:56:29.075328  617786 api_server.go:141] control plane version: v1.34.3
	I1222 23:56:29.075353  617786 api_server.go:131] duration metric: took 5.239983ms to wait for apiserver health ...
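The healthz gate above is a plain HTTPS GET: the apiserver counts as up once /healthz returns 200 with body "ok", and only then is the control-plane version read. A self-contained Go sketch of such a probe; the real client trusts the cluster CA, so the InsecureSkipVerify below is an illustrative shortcut, and the URL reuses the node address from this run:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver's /healthz endpoint until it answers
// 200 "ok" or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}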
	I1222 23:56:29.075362  617786 system_pods.go:43] waiting for kube-system pods to appear ...
	I1222 23:56:29.078435  617786 system_pods.go:59] 7 kube-system pods found
	I1222 23:56:29.078469  617786 system_pods.go:61] "coredns-66bc5c9577-zpfx2" [9c3aaf1d-9059-42a8-922e-6a896a81b377] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 23:56:29.078476  617786 system_pods.go:61] "etcd-custom-flannel-003676" [783ddf31-eafa-4c94-924f-bf3e44787709] Running
	I1222 23:56:29.078482  617786 system_pods.go:61] "kube-apiserver-custom-flannel-003676" [5ba47022-0195-4e7b-b7f5-323c2402b65a] Running
	I1222 23:56:29.078486  617786 system_pods.go:61] "kube-controller-manager-custom-flannel-003676" [2562fbe8-bb07-4050-a0e8-1b44e32bd433] Running
	I1222 23:56:29.078489  617786 system_pods.go:61] "kube-proxy-cc8hf" [6b1bae56-39be-4349-880e-472b712324db] Running
	I1222 23:56:29.078492  617786 system_pods.go:61] "kube-scheduler-custom-flannel-003676" [4a3a3ca3-0e38-4848-90af-dfd27e69c709] Running
	I1222 23:56:29.078496  617786 system_pods.go:61] "storage-provisioner" [c3e08736-9fc5-467a-8d48-2fbe33d6a22f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 23:56:29.078502  617786 system_pods.go:74] duration metric: took 3.134277ms to wait for pod list to return data ...
	I1222 23:56:29.078512  617786 default_sa.go:34] waiting for default service account to be created ...
	I1222 23:56:29.080673  617786 default_sa.go:45] found service account: "default"
	I1222 23:56:29.080690  617786 default_sa.go:55] duration metric: took 2.173144ms for default service account to be created ...
	I1222 23:56:29.080698  617786 system_pods.go:116] waiting for k8s-apps to be running ...
	I1222 23:56:29.083584  617786 system_pods.go:86] 7 kube-system pods found
	I1222 23:56:29.083657  617786 system_pods.go:89] "coredns-66bc5c9577-zpfx2" [9c3aaf1d-9059-42a8-922e-6a896a81b377] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 23:56:29.083666  617786 system_pods.go:89] "etcd-custom-flannel-003676" [783ddf31-eafa-4c94-924f-bf3e44787709] Running
	I1222 23:56:29.083676  617786 system_pods.go:89] "kube-apiserver-custom-flannel-003676" [5ba47022-0195-4e7b-b7f5-323c2402b65a] Running
	I1222 23:56:29.083690  617786 system_pods.go:89] "kube-controller-manager-custom-flannel-003676" [2562fbe8-bb07-4050-a0e8-1b44e32bd433] Running
	I1222 23:56:29.083697  617786 system_pods.go:89] "kube-proxy-cc8hf" [6b1bae56-39be-4349-880e-472b712324db] Running
	I1222 23:56:29.083704  617786 system_pods.go:89] "kube-scheduler-custom-flannel-003676" [4a3a3ca3-0e38-4848-90af-dfd27e69c709] Running
	I1222 23:56:29.083713  617786 system_pods.go:89] "storage-provisioner" [c3e08736-9fc5-467a-8d48-2fbe33d6a22f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 23:56:29.083746  617786 retry.go:84] will retry after 300ms: missing components: kube-dns
	I1222 23:56:29.364082  617786 system_pods.go:86] 7 kube-system pods found
	I1222 23:56:29.364124  617786 system_pods.go:89] "coredns-66bc5c9577-zpfx2" [9c3aaf1d-9059-42a8-922e-6a896a81b377] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 23:56:29.364133  617786 system_pods.go:89] "etcd-custom-flannel-003676" [783ddf31-eafa-4c94-924f-bf3e44787709] Running
	I1222 23:56:29.364140  617786 system_pods.go:89] "kube-apiserver-custom-flannel-003676" [5ba47022-0195-4e7b-b7f5-323c2402b65a] Running
	I1222 23:56:29.364147  617786 system_pods.go:89] "kube-controller-manager-custom-flannel-003676" [2562fbe8-bb07-4050-a0e8-1b44e32bd433] Running
	I1222 23:56:29.364152  617786 system_pods.go:89] "kube-proxy-cc8hf" [6b1bae56-39be-4349-880e-472b712324db] Running
	I1222 23:56:29.364158  617786 system_pods.go:89] "kube-scheduler-custom-flannel-003676" [4a3a3ca3-0e38-4848-90af-dfd27e69c709] Running
	I1222 23:56:29.364166  617786 system_pods.go:89] "storage-provisioner" [c3e08736-9fc5-467a-8d48-2fbe33d6a22f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 23:56:29.736547  617786 system_pods.go:86] 7 kube-system pods found
	I1222 23:56:29.736623  617786 system_pods.go:89] "coredns-66bc5c9577-zpfx2" [9c3aaf1d-9059-42a8-922e-6a896a81b377] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1222 23:56:29.736636  617786 system_pods.go:89] "etcd-custom-flannel-003676" [783ddf31-eafa-4c94-924f-bf3e44787709] Running
	I1222 23:56:29.736643  617786 system_pods.go:89] "kube-apiserver-custom-flannel-003676" [5ba47022-0195-4e7b-b7f5-323c2402b65a] Running
	I1222 23:56:29.736648  617786 system_pods.go:89] "kube-controller-manager-custom-flannel-003676" [2562fbe8-bb07-4050-a0e8-1b44e32bd433] Running
	I1222 23:56:29.736655  617786 system_pods.go:89] "kube-proxy-cc8hf" [6b1bae56-39be-4349-880e-472b712324db] Running
	I1222 23:56:29.736660  617786 system_pods.go:89] "kube-scheduler-custom-flannel-003676" [4a3a3ca3-0e38-4848-90af-dfd27e69c709] Running
	I1222 23:56:29.736668  617786 system_pods.go:89] "storage-provisioner" [c3e08736-9fc5-467a-8d48-2fbe33d6a22f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1222 23:56:30.143278  617786 system_pods.go:86] 7 kube-system pods found
	I1222 23:56:30.143311  617786 system_pods.go:89] "coredns-66bc5c9577-zpfx2" [9c3aaf1d-9059-42a8-922e-6a896a81b377] Running
	I1222 23:56:30.143317  617786 system_pods.go:89] "etcd-custom-flannel-003676" [783ddf31-eafa-4c94-924f-bf3e44787709] Running
	I1222 23:56:30.143321  617786 system_pods.go:89] "kube-apiserver-custom-flannel-003676" [5ba47022-0195-4e7b-b7f5-323c2402b65a] Running
	I1222 23:56:30.143325  617786 system_pods.go:89] "kube-controller-manager-custom-flannel-003676" [2562fbe8-bb07-4050-a0e8-1b44e32bd433] Running
	I1222 23:56:30.143328  617786 system_pods.go:89] "kube-proxy-cc8hf" [6b1bae56-39be-4349-880e-472b712324db] Running
	I1222 23:56:30.143331  617786 system_pods.go:89] "kube-scheduler-custom-flannel-003676" [4a3a3ca3-0e38-4848-90af-dfd27e69c709] Running
	I1222 23:56:30.143335  617786 system_pods.go:89] "storage-provisioner" [c3e08736-9fc5-467a-8d48-2fbe33d6a22f] Running
	I1222 23:56:30.143342  617786 system_pods.go:126] duration metric: took 1.062638758s to wait for k8s-apps to be running ...
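For reference, a rough kubectl equivalent of the poll above: the loop keeps listing kube-system pods until CoreDNS (what the log calls the missing kube-dns component) leaves Pending. Assuming the standard k8s-app=kube-dns label:

    kubectl -n kube-system get pods
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s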
	I1222 23:56:30.143352  617786 system_svc.go:44] waiting for kubelet service to be running ....
	I1222 23:56:30.143422  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:56:30.156746  617786 system_svc.go:56] duration metric: took 13.384073ms WaitForService to wait for kubelet
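The service check above works well in scripts because is-active exits non-zero for any state other than active, and --quiet suppresses printing the state name:

    sudo systemctl is-active --quiet kubelet && echo "kubelet running"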
	I1222 23:56:30.156783  617786 kubeadm.go:587] duration metric: took 12.912784562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:56:30.156805  617786 node_conditions.go:102] verifying NodePressure condition ...
	I1222 23:56:30.159721  617786 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1222 23:56:30.159753  617786 node_conditions.go:123] node cpu capacity is 8
	I1222 23:56:30.159773  617786 node_conditions.go:105] duration metric: took 2.962628ms to run NodePressure ...
	I1222 23:56:30.159788  617786 start.go:242] waiting for startup goroutines ...
	I1222 23:56:30.159797  617786 start.go:247] waiting for cluster config update ...
	I1222 23:56:30.159813  617786 start.go:256] writing updated cluster config ...
	I1222 23:56:30.160106  617786 ssh_runner.go:195] Run: rm -f paused
	I1222 23:56:30.163858  617786 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 23:56:30.167273  617786 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zpfx2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.171204  617786 pod_ready.go:94] pod "coredns-66bc5c9577-zpfx2" is "Ready"
	I1222 23:56:30.171227  617786 pod_ready.go:86] duration metric: took 3.935546ms for pod "coredns-66bc5c9577-zpfx2" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.173199  617786 pod_ready.go:83] waiting for pod "etcd-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.176997  617786 pod_ready.go:94] pod "etcd-custom-flannel-003676" is "Ready"
	I1222 23:56:30.177017  617786 pod_ready.go:86] duration metric: took 3.79246ms for pod "etcd-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.178850  617786 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.182523  617786 pod_ready.go:94] pod "kube-apiserver-custom-flannel-003676" is "Ready"
	I1222 23:56:30.182553  617786 pod_ready.go:86] duration metric: took 3.680863ms for pod "kube-apiserver-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.184257  617786 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.568199  617786 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-003676" is "Ready"
	I1222 23:56:30.568223  617786 pod_ready.go:86] duration metric: took 383.948281ms for pod "kube-controller-manager-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:30.768788  617786 pod_ready.go:83] waiting for pod "kube-proxy-cc8hf" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:31.168695  617786 pod_ready.go:94] pod "kube-proxy-cc8hf" is "Ready"
	I1222 23:56:31.168724  617786 pod_ready.go:86] duration metric: took 399.909448ms for pod "kube-proxy-cc8hf" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:31.368000  617786 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:27.225966  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:27.282345  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:27.476666  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:27.545301  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:28.498053  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:28.518191  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:28.573974  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:30.498143  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:30.980635  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:31.040465  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:31.768777  617786 pod_ready.go:94] pod "kube-scheduler-custom-flannel-003676" is "Ready"
	I1222 23:56:31.768813  617786 pod_ready.go:86] duration metric: took 400.786992ms for pod "kube-scheduler-custom-flannel-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1222 23:56:31.768829  617786 pod_ready.go:40] duration metric: took 1.604943407s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1222 23:56:31.815801  617786 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1222 23:56:31.817344  617786 out.go:179] * Done! kubectl is now configured to use "custom-flannel-003676" cluster and "default" namespace by default
	W1222 23:56:32.498413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:34.868644  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:34.927187  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:34.927227  622784 retry.go:84] will retry after 7.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
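The retry.go delays in this run (300ms, 7.7s, 10.8s, 21.6s, 31.3s across the various apply targets) grow roughly exponentially with jitter. A minimal bash sketch of the same pattern, not minikube's actual implementation, using the storageclass manifest from this log:

    delay=1
    for attempt in 1 2 3 4 5; do
      sudo kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml && break
      echo "attempt ${attempt} failed; retrying in ${delay}s"
      sleep "${delay}"
      delay=$((delay * 2))
    done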
	W1222 23:56:34.997930  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:36.186448  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:36.239960  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:36.998264  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:39.156321  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:39.212416  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:39.498118  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:41.997460  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:42.633551  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:42.686061  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:42.686101  622784 retry.go:84] will retry after 21.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:43.997753  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:46.497541  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:47.074585  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:47.134741  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:47.134782  622784 retry.go:84] will retry after 10.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:48.498232  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:50.559326  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:50.622142  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:50.622190  622784 retry.go:84] will retry after 31.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:50.998060  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:52.998159  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:55.497429  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
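The condition these node_ready retries keep failing on can be read directly once the apiserver answers; the node name below is taken from this log:

    kubectl get node no-preload-063943 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'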
	I1222 23:56:57.270461  479667 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:56:57.270514  479667 kubeadm.go:319] 
	I1222 23:56:57.270645  479667 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:56:57.273257  479667 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:56:57.273339  479667 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:56:57.273488  479667 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:56:57.273575  479667 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:56:57.273634  479667 kubeadm.go:319] OS: Linux
	I1222 23:56:57.273696  479667 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:56:57.273758  479667 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:56:57.273818  479667 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:56:57.273878  479667 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:56:57.273944  479667 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:56:57.274185  479667 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:56:57.274254  479667 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:56:57.274353  479667 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:56:57.274433  479667 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:56:57.274534  479667 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:56:57.274730  479667 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:56:57.274849  479667 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:56:57.274921  479667 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:56:57.276636  479667 out.go:252]   - Generating certificates and keys ...
	I1222 23:56:57.276741  479667 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:56:57.276823  479667 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:56:57.276939  479667 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:56:57.277025  479667 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:56:57.277126  479667 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:56:57.277196  479667 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:56:57.277276  479667 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:56:57.277407  479667 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:56:57.277534  479667 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:56:57.277666  479667 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:56:57.277742  479667 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:56:57.277821  479667 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:56:57.277889  479667 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:56:57.277966  479667 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:56:57.278037  479667 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:56:57.278120  479667 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:56:57.278199  479667 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:56:57.278318  479667 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:56:57.278434  479667 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:56:57.279970  479667 out.go:252]   - Booting up control plane ...
	I1222 23:56:57.280089  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:56:57.280218  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:56:57.280321  479667 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:56:57.280523  479667 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:56:57.280687  479667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:56:57.280859  479667 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:56:57.281014  479667 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:56:57.281084  479667 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:56:57.281295  479667 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:56:57.281469  479667 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:56:57.281571  479667 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001041622s
	I1222 23:56:57.281635  479667 kubeadm.go:319] 
	I1222 23:56:57.281714  479667 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:56:57.281762  479667 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:56:57.281902  479667 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:56:57.281914  479667 kubeadm.go:319] 
	I1222 23:56:57.282054  479667 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:56:57.282099  479667 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:56:57.282145  479667 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:56:57.282247  479667 kubeadm.go:403] duration metric: took 12m7.390423119s to StartCluster
	I1222 23:56:57.282304  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:56:57.282379  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:56:57.282503  479667 kubeadm.go:319] 
	I1222 23:56:57.323087  479667 cri.go:96] found id: ""
	I1222 23:56:57.323114  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.323127  479667 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:56:57.323136  479667 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:56:57.323199  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:56:57.348508  479667 cri.go:96] found id: ""
	I1222 23:56:57.348532  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.348541  479667 logs.go:284] No container was found matching "etcd"
	I1222 23:56:57.348552  479667 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:56:57.348617  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:56:57.377455  479667 cri.go:96] found id: ""
	I1222 23:56:57.377484  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.377495  479667 logs.go:284] No container was found matching "coredns"
	I1222 23:56:57.377503  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:56:57.377568  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:56:57.406330  479667 cri.go:96] found id: ""
	I1222 23:56:57.406353  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.406361  479667 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:56:57.406371  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:56:57.406431  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:56:57.433916  479667 cri.go:96] found id: ""
	I1222 23:56:57.433939  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.433949  479667 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:56:57.433956  479667 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:56:57.433999  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:56:57.463537  479667 cri.go:96] found id: ""
	I1222 23:56:57.463565  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.463576  479667 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:56:57.463585  479667 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:56:57.463662  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:56:57.492830  479667 cri.go:96] found id: ""
	I1222 23:56:57.492865  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.492877  479667 logs.go:284] No container was found matching "kindnet"
	I1222 23:56:57.492885  479667 cri.go:61] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1222 23:56:57.492947  479667 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=storage-provisioner
	I1222 23:56:57.533530  479667 cri.go:96] found id: ""
	I1222 23:56:57.533637  479667 logs.go:282] 0 containers: []
	W1222 23:56:57.533660  479667 logs.go:284] No container was found matching "storage-provisioner"
	I1222 23:56:57.533677  479667 logs.go:123] Gathering logs for kubelet ...
	I1222 23:56:57.533694  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:56:57.593557  479667 logs.go:123] Gathering logs for dmesg ...
	I1222 23:56:57.593586  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:56:57.613005  479667 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:56:57.613033  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:56:57.671159  479667 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:56:57.671186  479667 logs.go:123] Gathering logs for Docker ...
	I1222 23:56:57.671198  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:56:57.692998  479667 logs.go:123] Gathering logs for container status ...
	I1222 23:56:57.693027  479667 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
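The container-status command above, unrolled for readability: prefer crictl when it is installed, otherwise fall back to the docker CLI:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi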
	W1222 23:56:57.723470  479667 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001041622s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:56:57.723521  479667 out.go:285] * 
	W1222 23:56:57.723580  479667 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above; duplicate elided]
	
	W1222 23:56:57.723611  479667 out.go:285] * 
	W1222 23:56:57.723920  479667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:56:57.727055  479667 out.go:203] 
	W1222 23:56:57.728300  479667 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above; duplicate elided]
	
	W1222 23:56:57.728358  479667 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:56:57.728392  479667 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:56:57.729662  479667 out.go:203] 
	
	
	==> Docker <==
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[914]: time="2025-12-22T23:44:46.681483847Z" level=info msg="Daemon shutdown complete"
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[914]: time="2025-12-22T23:44:46.681539970Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 22 23:44:46 kubernetes-upgrade-767823 systemd[1]: docker.service: Deactivated successfully.
	Dec 22 23:44:46 kubernetes-upgrade-767823 systemd[1]: Stopped docker.service - Docker Application Container Engine.
	Dec 22 23:44:46 kubernetes-upgrade-767823 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:46.731996507Z" level=info msg="Starting up"
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:46.733188001Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:46.733303403Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:46.733320910Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:46.745752113Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:46.749267121Z" level=info msg="Loading containers: start."
	Dec 22 23:44:46 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:46.750899794Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.359752804Z" level=info msg="Restoring containers: start."
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.387207297Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.405351264Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.944316959Z" level=info msg="Loading containers: done."
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.958306351Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.958345664Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.958397885Z" level=info msg="Initializing buildkit"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.981160460Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.988234881Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.988334323Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.988351216Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:44:48 kubernetes-upgrade-767823 dockerd[1402]: time="2025-12-22T23:44:48.988369800Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:44:48 kubernetes-upgrade-767823 systemd[1]: Started docker.service - Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 20 ef 34 9e ff 08 06
	[  +2.780094] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 36 71 18 35 80 08 06
	[  +0.005286] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e 85 6b 14 50 db 08 06
	[Dec22 23:49] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 3d 46 1b 4b 15 08 06
	[  +8.285809] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 de e5 d5 d2 d6 08 06
	[Dec22 23:50] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 9c 73 09 d8 3c 08 06
	[Dec22 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fe dd 45 92 98 69 08 06
	[  +0.005109] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	[Dec22 23:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 26 d0 5e 2a 12 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	[Dec22 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 8d 7a bb 30 f9 08 06
	[ +11.914515] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e b2 e2 cd c9 e7 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 8d 7a bb 30 f9 08 06
	
	
	==> kernel <==
	 23:56:59 up  3:39,  0 user,  load average: 1.23, 1.42, 1.55
	Linux kubernetes-upgrade-767823 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:56:56 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:56 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 23:56:56 kubernetes-upgrade-767823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:56 kubernetes-upgrade-767823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:56 kubernetes-upgrade-767823 kubelet[25391]: E1222 23:56:56.798641   25391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:56 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:56 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:57 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 23:56:57 kubernetes-upgrade-767823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:57 kubernetes-upgrade-767823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:57 kubernetes-upgrade-767823 kubelet[25485]: E1222 23:56:57.553471   25485 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:57 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:57 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:58 kubernetes-upgrade-767823 kubelet[25544]: E1222 23:56:58.299364   25544 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:58 kubernetes-upgrade-767823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:59 kubernetes-upgrade-767823 kubelet[25678]: E1222 23:56:59.043404   25678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:59 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:59 kubernetes-upgrade-767823 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
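The kubelet section above pins down the actual failure: kubelet v1.35.0-rc.1 refuses to start because this host is still running cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so systemd crash-loops it (restart counter 319 through 322). A quick way to confirm which cgroup mode a host is running is sketched below; these are generic diagnostic commands, not part of the test harness:

	# Filesystem type at /sys/fs/cgroup: "cgroup2fs" means the unified
	# cgroup v2 hierarchy, "tmpfs" means the legacy cgroup v1 layout.
	stat -fc %T /sys/fs/cgroup

	# Docker reports the cgroup driver and version it detected (compare the
	# "CgroupDriver:cgroupfs" value in the docker info dumps later in this report):
	docker info --format '{{.CgroupDriver}} / cgroup v{{.CgroupVersion}}'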
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-767823 -n kubernetes-upgrade-767823
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-767823 -n kubernetes-upgrade-767823: exit status 2 (340.803846ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-767823" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-767823" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-767823
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-767823: (2.077464728s)
--- FAIL: TestKubernetesUpgrade (782.74s)
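The root cause of this failure is the cgroup v1 validation added to the v1.35 kubelet: the kubeadm preflight warning above names the KubeletConfiguration option 'FailCgroupV1', and minikube's own suggestion is --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of both mitigations follows; the camelCase field spelling (failCgroupV1) and the patch file path are assumptions inferred from the warning text and from kubeadm's patches mechanism (visible above as '[patches] Applied patch ... to target "kubeletconfiguration"'), not something this run verified:

	# Workaround suggested in the minikube output above:
	minikube start -p kubernetes-upgrade-767823 \
	  --kubernetes-version=v1.35.0-rc.1 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Alternative per the [WARNING SystemVerification] message: explicitly opt
	# back in to cgroup v1 through a kubeadm kubeletconfiguration patch
	# (assumed field spelling; the file lives in a kubeadm --patches directory):
	cat > patches/kubeletconfiguration.yaml <<-'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Note the same warning states that the SystemVerification preflight check must also be skipped explicitly; the long-term fix is moving the host to cgroup v2, per https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1.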

TestStartStop/group/no-preload/serial/FirstStart (503.18s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
E1222 23:46:21.954564   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:46:30.660207   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m21.893255597s)

-- stdout --
	* [no-preload-063943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-063943" primary control-plane node in "no-preload-063943" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	
	

-- /stdout --
** stderr ** 
	I1222 23:45:48.717586  502961 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:45:48.717899  502961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:45:48.717909  502961 out.go:374] Setting ErrFile to fd 2...
	I1222 23:45:48.717913  502961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:45:48.718105  502961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:45:48.718567  502961 out.go:368] Setting JSON to false
	I1222 23:45:48.719736  502961 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12489,"bootTime":1766434660,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:45:48.719793  502961 start.go:143] virtualization: kvm guest
	I1222 23:45:48.721553  502961 out.go:179] * [no-preload-063943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:45:48.722650  502961 notify.go:221] Checking for updates...
	I1222 23:45:48.722692  502961 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:45:48.723699  502961 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:45:48.724682  502961 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:45:48.725676  502961 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:45:48.726610  502961 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:45:48.727491  502961 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:45:48.728833  502961 config.go:182] Loaded profile config "cert-expiration-628145": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:45:48.728946  502961 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:45:48.729053  502961 config.go:182] Loaded profile config "old-k8s-version-687073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1222 23:45:48.729160  502961 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:45:48.753851  502961 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:45:48.753979  502961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:45:48.817992  502961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:45:48.807413821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:45:48.818096  502961 docker.go:319] overlay module found
	I1222 23:45:48.819458  502961 out.go:179] * Using the docker driver based on user configuration
	I1222 23:45:48.820450  502961 start.go:309] selected driver: docker
	I1222 23:45:48.820469  502961 start.go:928] validating driver "docker" against <nil>
	I1222 23:45:48.820484  502961 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:45:48.821549  502961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:45:48.877719  502961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:45:48.868347937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:45:48.877921  502961 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 23:45:48.878152  502961 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:45:48.879528  502961 out.go:179] * Using Docker driver with root privileges
	I1222 23:45:48.880391  502961 cni.go:84] Creating CNI manager for ""
	I1222 23:45:48.880467  502961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:45:48.880479  502961 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 23:45:48.880564  502961 start.go:353] cluster config:
	{Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:45:48.881685  502961 out.go:179] * Starting "no-preload-063943" primary control-plane node in "no-preload-063943" cluster
	I1222 23:45:48.882674  502961 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:45:48.883788  502961 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:45:48.884734  502961 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:45:48.884829  502961 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:45:48.884838  502961 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/config.json ...
	I1222 23:45:48.884930  502961 cache.go:107] acquiring lock: {Name:mka2a7cd00c9ee09fcd67b9fe2b1b7736241aafe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.884946  502961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/config.json: {Name:mk81f52fcc3834c94919182ff01139aafaf2c141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:45:48.884977  502961 cache.go:107] acquiring lock: {Name:mk7ec73f502e042fe14942dd4168f5178dfa9f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.885000  502961 cache.go:107] acquiring lock: {Name:mk6f1235a31bde2512aad5a6083026ce14993945 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.885039  502961 cache.go:107] acquiring lock: {Name:mk62dd6bd5d525d245831d36e2e60bb4a4c91eaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.885063  502961 cache.go:107] acquiring lock: {Name:mk0f5262807fb5404c75ce06ce5720befe312fb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.885024  502961 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 23:45:48.885095  502961 cache.go:107] acquiring lock: {Name:mk06ea69d634e565762c96598011d1945c901ed0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.885127  502961 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 207.616µs
	I1222 23:45:48.885140  502961 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 23:45:48.885141  502961 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1222 23:45:48.884943  502961 cache.go:107] acquiring lock: {Name:mk804c5f94e18a50ea710125b603ced35b076f37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.885120  502961 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 23:45:48.885159  502961 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 120.94µs
	I1222 23:45:48.885169  502961 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 23:45:48.885201  502961 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 23:45:48.885188  502961 cache.go:107] acquiring lock: {Name:mk02371428913253d2a19c8c9a792727a5cd8caa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.885222  502961 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 23:45:48.885260  502961 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 23:45:48.885314  502961 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 23:45:48.885374  502961 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 23:45:48.886421  502961 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 23:45:48.886447  502961 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 23:45:48.886445  502961 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 23:45:48.886435  502961 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 23:45:48.886509  502961 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 23:45:48.886561  502961 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 23:45:48.908416  502961 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:45:48.908433  502961 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:45:48.908454  502961 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:45:48.908487  502961 start.go:360] acquireMachinesLock for no-preload-063943: {Name:mke00101a1e3840a843a95b7b058ed2d434978f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:45:48.908580  502961 start.go:364] duration metric: took 74.947µs to acquireMachinesLock for "no-preload-063943"
	I1222 23:45:48.908622  502961 start.go:93] Provisioning new machine with config: &{Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:45:48.908696  502961 start.go:125] createHost starting for "" (driver="docker")
	I1222 23:45:48.910372  502961 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:45:48.910635  502961 start.go:159] libmachine.API.Create for "no-preload-063943" (driver="docker")
	I1222 23:45:48.910667  502961 client.go:173] LocalClient.Create starting
	I1222 23:45:48.910736  502961 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:45:48.910768  502961 main.go:144] libmachine: Decoding PEM data...
	I1222 23:45:48.910794  502961 main.go:144] libmachine: Parsing certificate...
	I1222 23:45:48.910855  502961 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:45:48.910882  502961 main.go:144] libmachine: Decoding PEM data...
	I1222 23:45:48.910899  502961 main.go:144] libmachine: Parsing certificate...
	I1222 23:45:48.911302  502961 cli_runner.go:164] Run: docker network inspect no-preload-063943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:45:48.931277  502961 cli_runner.go:211] docker network inspect no-preload-063943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:45:48.931340  502961 network_create.go:284] running [docker network inspect no-preload-063943] to gather additional debugging logs...
	I1222 23:45:48.931362  502961 cli_runner.go:164] Run: docker network inspect no-preload-063943
	W1222 23:45:48.948115  502961 cli_runner.go:211] docker network inspect no-preload-063943 returned with exit code 1
	I1222 23:45:48.948145  502961 network_create.go:287] error running [docker network inspect no-preload-063943]: docker network inspect no-preload-063943: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-063943 not found
	I1222 23:45:48.948159  502961 network_create.go:289] output of [docker network inspect no-preload-063943]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-063943 not found
	
	** /stderr **
	I1222 23:45:48.948271  502961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:45:48.966747  502961 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:45:48.967185  502961 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:45:48.967627  502961 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:45:48.968073  502961 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5f6692e5184d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:79:d3:b1:de:45} reservation:<nil>}
	I1222 23:45:48.968621  502961 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e7826d3e34de IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:cd:3f:35:af} reservation:<nil>}
	I1222 23:45:48.969104  502961 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-fd7ae4b550ba IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ce:80:7d:54:06:fa} reservation:<nil>}
	I1222 23:45:48.969783  502961 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e130e0}
	I1222 23:45:48.969805  502961 network_create.go:124] attempt to create docker network no-preload-063943 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1222 23:45:48.969845  502961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-063943 no-preload-063943
	I1222 23:45:49.021170  502961 network_create.go:108] docker network no-preload-063943 192.168.103.0/24 created
	I1222 23:45:49.021202  502961 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-063943" container
	I1222 23:45:49.021269  502961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:45:49.040357  502961 cli_runner.go:164] Run: docker volume create no-preload-063943 --label name.minikube.sigs.k8s.io=no-preload-063943 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:45:49.046711  502961 cache.go:162] opening:  /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1222 23:45:49.049578  502961 cache.go:162] opening:  /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 23:45:49.060276  502961 oci.go:103] Successfully created a docker volume no-preload-063943
	I1222 23:45:49.060359  502961 cli_runner.go:164] Run: docker run --rm --name no-preload-063943-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-063943 --entrypoint /usr/bin/test -v no-preload-063943:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:45:49.097376  502961 cache.go:162] opening:  /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 23:45:49.098160  502961 cache.go:162] opening:  /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 23:45:49.119739  502961 cache.go:162] opening:  /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 23:45:49.170091  502961 cache.go:162] opening:  /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 23:45:49.470743  502961 oci.go:107] Successfully prepared a docker volume no-preload-063943
	I1222 23:45:49.470786  502961 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	W1222 23:45:49.470914  502961 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:45:49.471040  502961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:45:49.514562  502961 cache.go:157] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 23:45:49.514606  502961 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 629.654296ms
	I1222 23:45:49.514623  502961 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 23:45:49.539049  502961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-063943 --name no-preload-063943 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-063943 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-063943 --network no-preload-063943 --ip 192.168.103.2 --volume no-preload-063943:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:45:49.818574  502961 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Running}}
	I1222 23:45:49.837061  502961 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:45:49.857478  502961 cli_runner.go:164] Run: docker exec no-preload-063943 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:45:49.908905  502961 oci.go:144] the created container "no-preload-063943" has a running status.
	I1222 23:45:49.908937  502961 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa...
	I1222 23:45:50.043908  502961 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:45:50.078305  502961 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:45:50.099330  502961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:45:50.099356  502961 kic_runner.go:114] Args: [docker exec --privileged no-preload-063943 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:45:50.163535  502961 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:45:50.185890  502961 machine.go:94] provisionDockerMachine start ...
	I1222 23:45:50.185990  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:50.213558  502961 main.go:144] libmachine: Using SSH client type: native
	I1222 23:45:50.213938  502961 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1222 23:45:50.213965  502961 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:45:50.214698  502961 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49854->127.0.0.1:33083: read: connection reset by peer
	I1222 23:45:50.259166  502961 cache.go:157] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 23:45:50.259197  502961 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.374220303s
	I1222 23:45:50.259220  502961 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 23:45:50.355476  502961 cache.go:157] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 23:45:50.355519  502961 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.470498934s
	I1222 23:45:50.355538  502961 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 23:45:50.384197  502961 cache.go:157] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 23:45:50.384233  502961 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.499096287s
	I1222 23:45:50.384249  502961 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 23:45:50.402199  502961 cache.go:157] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 23:45:50.402225  502961 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.517253431s
	I1222 23:45:50.402236  502961 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 23:45:50.428081  502961 cache.go:157] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 23:45:50.428108  502961 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.543066831s
	I1222 23:45:50.428123  502961 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 23:45:50.428141  502961 cache.go:87] Successfully saved all images to host disk.
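The cache.go lines above are the no-preload image path: with no preload tarball published for v1.35.0-rc.1, minikube pulls each control-plane image and saves it as a tarball under .minikube/cache/images before copying it into the node. minikube does this in Go; a rough docker-CLI equivalent of the save step, using an image name taken from the log (the local paths are illustrative only):

	# pull one of the images the log caches, then save it as a load-able tarball
	docker pull registry.k8s.io/kube-proxy:v1.35.0-rc.1
	mkdir -p ~/.minikube/cache/images/amd64/registry.k8s.io
	docker save -o ~/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 \
	  registry.k8s.io/kube-proxy:v1.35.0-rc.1

The "docker load" runs later in this log consume exactly these tarballs after they are copied to /var/lib/minikube/images on the node.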
	I1222 23:45:53.366283  502961 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-063943
	
	I1222 23:45:53.366322  502961 ubuntu.go:182] provisioning hostname "no-preload-063943"
	I1222 23:45:53.366416  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:53.389137  502961 main.go:144] libmachine: Using SSH client type: native
	I1222 23:45:53.389470  502961 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1222 23:45:53.389488  502961 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-063943 && echo "no-preload-063943" | sudo tee /etc/hostname
	I1222 23:45:53.543566  502961 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-063943
	
	I1222 23:45:53.543672  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:53.561848  502961 main.go:144] libmachine: Using SSH client type: native
	I1222 23:45:53.562144  502961 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1222 23:45:53.562173  502961 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-063943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-063943/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-063943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:45:53.710451  502961 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:45:53.710482  502961 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:45:53.710504  502961 ubuntu.go:190] setting up certificates
	I1222 23:45:53.710514  502961 provision.go:84] configureAuth start
	I1222 23:45:53.710576  502961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:45:53.729874  502961 provision.go:143] copyHostCerts
	I1222 23:45:53.729930  502961 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:45:53.729941  502961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:45:53.729999  502961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:45:53.730103  502961 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:45:53.730115  502961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:45:53.730166  502961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:45:53.730243  502961 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:45:53.730251  502961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:45:53.730276  502961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:45:53.730341  502961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.no-preload-063943 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-063943]
	I1222 23:45:53.792876  502961 provision.go:177] copyRemoteCerts
	I1222 23:45:53.792932  502961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:45:53.792989  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:53.810946  502961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:45:53.912524  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:45:53.931833  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 23:45:53.948926  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:45:53.968660  502961 provision.go:87] duration metric: took 258.130156ms to configureAuth
	I1222 23:45:53.968686  502961 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:45:53.968878  502961 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:45:53.968933  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:53.991091  502961 main.go:144] libmachine: Using SSH client type: native
	I1222 23:45:53.991409  502961 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1222 23:45:53.991424  502961 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:45:54.139463  502961 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:45:54.139492  502961 ubuntu.go:71] root file system type: overlay
	I1222 23:45:54.139677  502961 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:45:54.139762  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:54.156984  502961 main.go:144] libmachine: Using SSH client type: native
	I1222 23:45:54.157198  502961 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1222 23:45:54.157258  502961 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:45:54.325890  502961 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:45:54.326018  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:54.345072  502961 main.go:144] libmachine: Using SSH client type: native
	I1222 23:45:54.345358  502961 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1222 23:45:54.345400  502961 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:45:55.630730  502961 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-22 23:45:54.323320264 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
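The comments minikube embeds in the unit describe a standard systemd idiom: a unit (or drop-in) that inherits an ExecStart= must first write an empty ExecStart= to clear the inherited command, otherwise systemd refuses to start a Type=notify service with two ExecStart= settings. minikube rewrites the whole unit file, as the diff above shows; the same idiom in a minimal drop-in form (the override path and dockerd flags below are illustrative, not what minikube writes):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	# clear the ExecStart inherited from the base unit, then set the new command
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker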
	I1222 23:45:55.630764  502961 machine.go:97] duration metric: took 5.444844785s to provisionDockerMachine
	I1222 23:45:55.630777  502961 client.go:176] duration metric: took 6.720101028s to LocalClient.Create
	I1222 23:45:55.630796  502961 start.go:167] duration metric: took 6.720163689s to libmachine.API.Create "no-preload-063943"
	I1222 23:45:55.630803  502961 start.go:293] postStartSetup for "no-preload-063943" (driver="docker")
	I1222 23:45:55.630814  502961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:45:55.630869  502961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:45:55.630905  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:55.648959  502961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:45:55.772418  502961 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:45:55.777547  502961 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:45:55.777581  502961 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:45:55.777623  502961 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:45:55.777710  502961 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:45:55.777821  502961 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:45:55.777950  502961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:45:55.788276  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:45:55.812939  502961 start.go:296] duration metric: took 182.119352ms for postStartSetup
	I1222 23:45:55.813277  502961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:45:55.831091  502961 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/config.json ...
	I1222 23:45:55.831409  502961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:45:55.831479  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:55.848437  502961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:45:55.949752  502961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:45:55.954318  502961 start.go:128] duration metric: took 7.045606114s to createHost
	I1222 23:45:55.954359  502961 start.go:83] releasing machines lock for "no-preload-063943", held for 7.045740989s
	I1222 23:45:55.954443  502961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:45:55.972498  502961 ssh_runner.go:195] Run: cat /version.json
	I1222 23:45:55.972562  502961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:45:55.972570  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:55.972664  502961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:45:55.990869  502961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:45:55.991929  502961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:45:56.088956  502961 ssh_runner.go:195] Run: systemctl --version
	I1222 23:45:56.148226  502961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:45:56.152969  502961 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:45:56.153054  502961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:45:56.178317  502961 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1222 23:45:56.178340  502961 start.go:496] detecting cgroup driver to use...
	I1222 23:45:56.178371  502961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:45:56.178484  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:45:56.193575  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:45:56.203346  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:45:56.211848  502961 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:45:56.211902  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:45:56.220573  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:45:56.229139  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:45:56.237374  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:45:56.245668  502961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:45:56.253438  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:45:56.261658  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:45:56.269947  502961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 23:45:56.278782  502961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:45:56.285888  502961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:45:56.292863  502961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:45:56.376205  502961 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 23:45:56.449617  502961 start.go:496] detecting cgroup driver to use...
	I1222 23:45:56.449684  502961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:45:56.449738  502961 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:45:56.464396  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:45:56.479333  502961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:45:56.499074  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:45:56.515242  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:45:56.532095  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:45:56.550118  502961 ssh_runner.go:195] Run: which cri-dockerd
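At this point /etc/crictl.yaml has been rewritten to point at cri-dockerd's socket instead of containerd's, matching the docker container runtime under test. A quick manual check that the endpoint answers (crictl reads /etc/crictl.yaml by default; passing the endpoint explicitly, as below, is equivalent):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version

The "crictl version" output further down in this log (RuntimeName: docker, RuntimeVersion: 29.1.3) is what a successful check looks like.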
	I1222 23:45:56.554841  502961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:45:56.566189  502961 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:45:56.582354  502961 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:45:56.687002  502961 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:45:56.787850  502961 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:45:56.788010  502961 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:45:56.803715  502961 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:45:56.818017  502961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:45:56.920627  502961 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:45:57.662451  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:45:57.676929  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:45:57.690764  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:45:57.706328  502961 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:45:57.799847  502961 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:45:57.889894  502961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:45:57.976907  502961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:45:57.995851  502961 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:45:58.010043  502961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:45:58.102007  502961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:45:58.181295  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:45:58.194760  502961 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:45:58.194833  502961 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 23:45:58.198947  502961 start.go:564] Will wait 60s for crictl version
	I1222 23:45:58.199008  502961 ssh_runner.go:195] Run: which crictl
	I1222 23:45:58.202887  502961 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:45:58.229415  502961 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 23:45:58.229471  502961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:45:58.253947  502961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:45:58.283192  502961 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 23:45:58.283300  502961 cli_runner.go:164] Run: docker network inspect no-preload-063943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:45:58.302893  502961 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1222 23:45:58.307067  502961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:45:58.317915  502961 kubeadm.go:884] updating cluster {Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:45:58.318056  502961 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:45:58.318103  502961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:45:58.346453  502961 docker.go:694] Got preloaded images: 
	I1222 23:45:58.346476  502961 docker.go:700] registry.k8s.io/kube-apiserver:v1.35.0-rc.1 wasn't preloaded
	I1222 23:45:58.346485  502961 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1222 23:45:58.347713  502961 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 23:45:58.348063  502961 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 23:45:58.348311  502961 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 23:45:58.348457  502961 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 23:45:58.348641  502961 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1222 23:45:58.348787  502961 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 23:45:58.348925  502961 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 23:45:58.349081  502961 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 23:45:58.349244  502961 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:45:58.349504  502961 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1222 23:45:58.349806  502961 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 23:45:58.349822  502961 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1222 23:45:58.350147  502961 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:45:58.350151  502961 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 23:45:58.350566  502961 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1222 23:45:58.351040  502961 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 23:45:58.478794  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 23:45:58.491248  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 23:45:58.492524  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1222 23:45:58.494669  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 23:45:58.496119  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 23:45:58.497001  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1222 23:45:58.508747  502961 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1222 23:45:58.508816  502961 docker.go:341] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 23:45:58.508863  502961 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1222 23:45:58.519104  502961 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1222 23:45:58.519417  502961 docker.go:341] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 23:45:58.519520  502961 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1222 23:45:58.526197  502961 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1222 23:45:58.526253  502961 docker.go:341] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1222 23:45:58.526304  502961 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1222 23:45:58.528923  502961 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1222 23:45:58.528974  502961 docker.go:341] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 23:45:58.529028  502961 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1222 23:45:58.529103  502961 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1222 23:45:58.529139  502961 docker.go:341] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 23:45:58.529174  502961 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1222 23:45:58.531676  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1222 23:45:58.542716  502961 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1222 23:45:58.542761  502961 docker.go:341] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1222 23:45:58.542799  502961 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.6-0
	I1222 23:45:58.555507  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1222 23:45:58.555632  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 23:45:58.575250  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1222 23:45:58.575378  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 23:45:58.575507  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1222 23:45:58.575585  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1222 23:45:58.575665  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1222 23:45:58.575757  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 23:45:58.579220  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1222 23:45:58.579286  502961 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1222 23:45:58.579298  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 23:45:58.579334  502961 docker.go:341] Removing image: registry.k8s.io/pause:3.10.1
	I1222 23:45:58.579375  502961 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1222 23:45:58.645612  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1222 23:45:58.645663  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1222 23:45:58.645703  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1222 23:45:58.645710  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1222 23:45:58.645723  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1222 23:45:58.645750  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1222 23:45:58.645783  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1222 23:45:58.645812  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1222 23:45:58.645822  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1222 23:45:58.645833  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1222 23:45:58.692448  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1222 23:45:58.692494  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1222 23:45:58.692554  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1222 23:45:58.692572  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1222 23:45:58.692701  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1222 23:45:58.692805  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1222 23:45:58.776621  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1222 23:45:58.776684  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1222 23:45:58.884914  502961 docker.go:308] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1222 23:45:58.884957  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1222 23:45:58.984734  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1222 23:45:59.002172  502961 docker.go:308] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1222 23:45:59.002248  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 | docker load"
	I1222 23:46:00.006563  502961 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 | docker load": (1.004290739s)
	I1222 23:46:00.006588  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1222 23:46:00.006631  502961 docker.go:308] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1222 23:46:00.006648  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 | docker load"
	I1222 23:46:00.648399  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1222 23:46:00.648442  502961 docker.go:308] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1222 23:46:00.648458  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1222 23:46:01.302258  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1222 23:46:01.302310  502961 docker.go:308] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1222 23:46:01.302335  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 | docker load"
	I1222 23:46:01.317713  502961 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:46:02.033369  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1222 23:46:02.033409  502961 docker.go:308] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1222 23:46:02.033433  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.6-0 | docker load"
	I1222 23:46:02.033493  502961 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1222 23:46:02.033537  502961 docker.go:341] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:46:02.033584  502961 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:46:02.958802  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1222 23:46:02.958866  502961 docker.go:308] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1222 23:46:02.958880  502961 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1222 23:46:02.958897  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 | docker load"
	I1222 23:46:02.958951  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1222 23:46:03.905232  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1222 23:46:03.905326  502961 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1222 23:46:03.905359  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1222 23:46:03.958916  502961 docker.go:308] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1222 23:46:03.958949  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1222 23:46:04.222331  502961 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1222 23:46:04.222369  502961 cache_images.go:125] Successfully loaded all cached images
	I1222 23:46:04.222376  502961 cache_images.go:94] duration metric: took 5.87587502s to LoadCachedImages
	I1222 23:46:04.222394  502961 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 docker true true} ...
	I1222 23:46:04.222499  502961 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-063943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1222 23:46:04.222561  502961 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 23:46:04.271843  502961 cni.go:84] Creating CNI manager for ""
	I1222 23:46:04.271877  502961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:46:04.271896  502961 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 23:46:04.271927  502961 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-063943 NodeName:no-preload-063943 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:46:04.272120  502961 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-063943"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
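The generated kubeadm.yaml above is a four-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A sketch of checking it by hand before init, assuming a kubeadm recent enough to ship the `config validate` subcommand:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml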
	I1222 23:46:04.272193  502961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 23:46:04.280558  502961 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1222 23:46:04.280636  502961 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 23:46:04.288395  502961 download.go:114] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet
	I1222 23:46:04.288425  502961 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
	I1222 23:46:04.288447  502961 download.go:114] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm
	I1222 23:46:04.288508  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1222 23:46:04.292641  502961 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1222 23:46:04.292672  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (58597560 bytes)
	I1222 23:46:05.683472  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:46:05.696540  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1222 23:46:05.700776  502961 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1222 23:46:05.700817  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1222 23:46:05.792674  502961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1222 23:46:05.799073  502961 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1222 23:46:05.799114  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
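The `?checksum=file:...sha256` suffix on the download URLs above makes the downloader fetch the published digest and verify each binary before caching it. A manual equivalent against the same dl.k8s.io release paths:

    curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check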
	I1222 23:46:06.001856  502961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:46:06.010042  502961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1222 23:46:06.022502  502961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 23:46:06.034986  502961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1222 23:46:06.047120  502961 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:46:06.050765  502961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
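The /etc/hosts rewrite above is idempotent: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back in via sudo. A quick check that the alias points at the node IP:

    grep 'control-plane.minikube.internal' /etc/hosts
    # expected: 192.168.103.2   control-plane.minikube.internal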
	I1222 23:46:06.061104  502961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:46:06.154863  502961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:46:06.184835  502961 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943 for IP: 192.168.103.2
	I1222 23:46:06.184859  502961 certs.go:195] generating shared ca certs ...
	I1222 23:46:06.184877  502961 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:46:06.185039  502961 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:46:06.185095  502961 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:46:06.185109  502961 certs.go:257] generating profile certs ...
	I1222 23:46:06.185179  502961 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.key
	I1222 23:46:06.185199  502961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.crt with IP's: []
	I1222 23:46:06.236611  502961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.crt ...
	I1222 23:46:06.236637  502961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.crt: {Name:mk604350cab291cb29581e4873f1d8d5c8554a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:46:06.236800  502961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.key ...
	I1222 23:46:06.236811  502961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.key: {Name:mkdba665d926f7b4ae84b777790810f50d655b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:46:06.236916  502961 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key.9af7d729
	I1222 23:46:06.236934  502961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt.9af7d729 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1222 23:46:06.390152  502961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt.9af7d729 ...
	I1222 23:46:06.390188  502961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt.9af7d729: {Name:mk78096c3594c5f503554479c2f640b695873c62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:46:06.390416  502961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key.9af7d729 ...
	I1222 23:46:06.390438  502961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key.9af7d729: {Name:mk3ea05b5b1fb89ffee28c2c1cf11f18e25b63ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:46:06.390534  502961 certs.go:382] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt.9af7d729 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt
	I1222 23:46:06.390643  502961 certs.go:386] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key.9af7d729 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key
	I1222 23:46:06.390748  502961 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key
	I1222 23:46:06.390784  502961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.crt with IP's: []
	I1222 23:46:06.424911  502961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.crt ...
	I1222 23:46:06.424940  502961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.crt: {Name:mk7762cc9545d3fb89a0114bf7dbc95987aad2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:46:06.425112  502961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key ...
	I1222 23:46:06.425137  502961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key: {Name:mk0c8a300b8f2abee4ffb1d4a07254f15e8c8568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:46:06.425425  502961 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:46:06.425492  502961 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:46:06.425510  502961 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:46:06.425555  502961 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:46:06.425619  502961 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:46:06.425664  502961 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:46:06.425747  502961 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:46:06.426745  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:46:06.445725  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:46:06.465127  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:46:06.483448  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:46:06.501467  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 23:46:06.519914  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 23:46:06.538402  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:46:06.557476  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 23:46:06.577844  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:46:06.600642  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:46:06.618814  502961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:46:06.637360  502961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:46:06.651132  502961 ssh_runner.go:195] Run: openssl version
	I1222 23:46:06.657436  502961 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:46:06.666272  502961 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:46:06.674972  502961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:46:06.679117  502961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:46:06.679168  502961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:46:06.715024  502961 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 23:46:06.723411  502961 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/758032.pem /etc/ssl/certs/3ec20f2e.0
	I1222 23:46:06.731891  502961 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:46:06.740347  502961 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:46:06.747974  502961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:46:06.752465  502961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:46:06.752519  502961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:46:06.791459  502961 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:46:06.801083  502961 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 23:46:06.809063  502961 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:46:06.817117  502961 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:46:06.825637  502961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:46:06.829768  502961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:46:06.829838  502961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:46:06.869776  502961 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 23:46:06.877859  502961 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75803.pem /etc/ssl/certs/51391683.0
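The eight-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: libssl looks a CA up in /etc/ssl/certs by the hash of its subject, so each `openssl x509 -hash` call computes the name the symlink must carry. The pattern, using the minikube CA as the example:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"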
	I1222 23:46:06.887175  502961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:46:06.891099  502961 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 23:46:06.891161  502961 kubeadm.go:401] StartCluster: {Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:46:06.891276  502961 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:46:06.910400  502961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:46:06.918523  502961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:46:06.926087  502961 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:46:06.926140  502961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:46:06.933959  502961 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:46:06.933979  502961 kubeadm.go:158] found existing configuration files:
	
	I1222 23:46:06.934021  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:46:06.941631  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:46:06.941679  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:46:06.948567  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:46:06.956238  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:46:06.956285  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:46:06.963573  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:46:06.971136  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:46:06.971174  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:46:06.978336  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:46:06.986464  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:46:06.986515  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:46:06.994160  502961 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:46:07.105520  502961 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:46:07.106035  502961 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:46:07.162225  502961 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:50:08.871219  502961 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:50:08.871257  502961 kubeadm.go:319] 
	I1222 23:50:08.871329  502961 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:50:08.874902  502961 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:50:08.874999  502961 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:50:08.875118  502961 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:50:08.875250  502961 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:50:08.875319  502961 kubeadm.go:319] OS: Linux
	I1222 23:50:08.875390  502961 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:50:08.875478  502961 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:50:08.875562  502961 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:50:08.875644  502961 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:50:08.875724  502961 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:50:08.875794  502961 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:50:08.875864  502961 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:50:08.875933  502961 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:50:08.876008  502961 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:50:08.876119  502961 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:50:08.876265  502961 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:50:08.876395  502961 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:50:08.876510  502961 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:50:08.879672  502961 out.go:252]   - Generating certificates and keys ...
	I1222 23:50:08.879769  502961 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:50:08.879860  502961 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:50:08.879963  502961 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 23:50:08.880045  502961 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 23:50:08.880131  502961 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 23:50:08.880199  502961 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 23:50:08.880271  502961 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 23:50:08.880475  502961 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-063943] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1222 23:50:08.880557  502961 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 23:50:08.880764  502961 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-063943] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1222 23:50:08.880865  502961 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 23:50:08.880954  502961 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 23:50:08.881018  502961 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 23:50:08.881098  502961 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:50:08.881179  502961 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:50:08.881253  502961 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:50:08.881352  502961 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:50:08.881455  502961 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:50:08.881540  502961 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:50:08.881670  502961 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:50:08.881761  502961 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:50:08.882758  502961 out.go:252]   - Booting up control plane ...
	I1222 23:50:08.882876  502961 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:50:08.882986  502961 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:50:08.883079  502961 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:50:08.883225  502961 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:50:08.883369  502961 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:50:08.883500  502961 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:50:08.883634  502961 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:50:08.883695  502961 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:50:08.883856  502961 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:50:08.883988  502961 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:50:08.884077  502961 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001138504s
	I1222 23:50:08.884086  502961 kubeadm.go:319] 
	I1222 23:50:08.884159  502961 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:50:08.884218  502961 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:50:08.884366  502961 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:50:08.884376  502961 kubeadm.go:319] 
	I1222 23:50:08.884528  502961 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:50:08.884574  502961 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:50:08.884634  502961 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:50:08.884668  502961 kubeadm.go:319] 
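The probe kubeadm gave up on is the kubelet's local healthz endpoint, so the commands it suggests above, plus the probe itself, are the first things to run on the node:

    systemctl status kubelet
    journalctl -xeu kubelet
    curl -sSL http://127.0.0.1:10248/healthz   # the exact check kubeadm polls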
	W1222 23:50:08.884806  502961 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-063943] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-063943] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001138504s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
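The cgroups v1 warning above is the likely root cause: per its own text, kubelet v1.35+ refuses to run on a cgroup v1 host unless support is explicitly re-enabled. A minimal sketch of the opt-out it names, assuming the YAML spelling is the lowerCamelCase form of 'FailCgroupV1' (the file path here is illustrative only):

    # sketch: KubeletConfiguration fragment per the warning; field casing assumed
    cat >/tmp/kubelet-cgroupv1-optout.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF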
	I1222 23:50:08.884890  502961 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:50:09.301587  502961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:50:09.314549  502961 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:50:09.314632  502961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:50:09.322515  502961 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:50:09.322532  502961 kubeadm.go:158] found existing configuration files:
	
	I1222 23:50:09.322579  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:50:09.330083  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:50:09.330142  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:50:09.337705  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:50:09.345389  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:50:09.345442  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:50:09.353143  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:50:09.361155  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:50:09.361205  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:50:09.368361  502961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:50:09.375696  502961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:50:09.375742  502961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:50:09.382934  502961 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:50:09.424501  502961 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:50:09.424576  502961 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:50:09.500573  502961 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:50:09.500686  502961 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:50:09.500748  502961 kubeadm.go:319] OS: Linux
	I1222 23:50:09.500820  502961 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:50:09.500899  502961 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:50:09.500980  502961 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:50:09.501027  502961 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:50:09.501070  502961 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:50:09.501120  502961 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:50:09.501174  502961 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:50:09.501264  502961 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:50:09.501350  502961 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:50:09.573354  502961 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:50:09.573497  502961 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:50:09.573649  502961 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:50:09.585727  502961 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:50:09.588039  502961 out.go:252]   - Generating certificates and keys ...
	I1222 23:50:09.588139  502961 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:50:09.588224  502961 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:50:09.588329  502961 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:50:09.588414  502961 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:50:09.588506  502961 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:50:09.588584  502961 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:50:09.588711  502961 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:50:09.588807  502961 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:50:09.588911  502961 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:50:09.589008  502961 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:50:09.589064  502961 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:50:09.589131  502961 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:50:09.670452  502961 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:50:09.714335  502961 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:50:09.825257  502961 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:50:09.838857  502961 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:50:10.020669  502961 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:50:10.021233  502961 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:50:10.023323  502961 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:50:10.024962  502961 out.go:252]   - Booting up control plane ...
	I1222 23:50:10.025060  502961 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:50:10.025172  502961 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:50:10.025273  502961 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:50:10.046040  502961 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:50:10.046178  502961 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:50:10.055567  502961 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:50:10.056063  502961 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:50:10.056123  502961 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:50:10.176247  502961 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:50:10.176396  502961 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:54:10.177108  502961 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000357414s
	I1222 23:54:10.177165  502961 kubeadm.go:319] 
	I1222 23:54:10.177310  502961 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:54:10.177524  502961 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:54:10.177810  502961 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:54:10.177828  502961 kubeadm.go:319] 
	I1222 23:54:10.178077  502961 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:54:10.178152  502961 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:54:10.178219  502961 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:54:10.178231  502961 kubeadm.go:319] 
	I1222 23:54:10.180123  502961 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:54:10.180949  502961 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:54:10.181073  502961 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:54:10.181313  502961 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 23:54:10.181322  502961 kubeadm.go:319] 
	I1222 23:54:10.181426  502961 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:54:10.181463  502961 kubeadm.go:403] duration metric: took 8m3.290305083s to StartCluster
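Note that the probe error changed between attempts: the first init timed out ("context deadline exceeded"), while the retry ends with "connection refused", meaning nothing is listening on 10248 at all, which would be consistent with the kubelet exiting at startup. Distinguishing the two on the node:

    curl -sS http://127.0.0.1:10248/healthz
    # connection refused -> kubelet not running / not listening
    # hang, then timeout -> kubelet up but healthz never goes healthy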
	I1222 23:54:10.181529  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:54:10.181635  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:54:10.218237  502961 cri.go:96] found id: ""
	I1222 23:54:10.218275  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.218287  502961 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:54:10.218296  502961 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:54:10.218354  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:54:10.243757  502961 cri.go:96] found id: ""
	I1222 23:54:10.243787  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.243799  502961 logs.go:284] No container was found matching "etcd"
	I1222 23:54:10.243808  502961 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:54:10.243868  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:54:10.273079  502961 cri.go:96] found id: ""
	I1222 23:54:10.273107  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.273120  502961 logs.go:284] No container was found matching "coredns"
	I1222 23:54:10.273129  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:54:10.273204  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:54:10.299807  502961 cri.go:96] found id: ""
	I1222 23:54:10.299834  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.299846  502961 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:54:10.299855  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:54:10.299907  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:54:10.324887  502961 cri.go:96] found id: ""
	I1222 23:54:10.324911  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.324919  502961 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:54:10.324926  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:54:10.324980  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:54:10.348741  502961 cri.go:96] found id: ""
	I1222 23:54:10.348766  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.348775  502961 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:54:10.348783  502961 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:54:10.348838  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:54:10.372916  502961 cri.go:96] found id: ""
	I1222 23:54:10.372942  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.372952  502961 logs.go:284] No container was found matching "kindnet"
	I1222 23:54:10.372966  502961 logs.go:123] Gathering logs for container status ...
	I1222 23:54:10.372982  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:54:10.399951  502961 logs.go:123] Gathering logs for kubelet ...
	I1222 23:54:10.399977  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:54:10.448425  502961 logs.go:123] Gathering logs for dmesg ...
	I1222 23:54:10.448459  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:54:10.468561  502961 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:54:10.468588  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:54:10.524190  502961 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:54:10.524222  502961 logs.go:123] Gathering logs for Docker ...
	I1222 23:54:10.524236  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1222 23:54:10.545693  502961 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:54:10.545755  502961 out.go:285] * 
	W1222 23:54:10.545816  502961 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:54:10.545830  502961 out.go:285] * 
	W1222 23:54:10.546079  502961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:54:10.548818  502961 out.go:203] 
	W1222 23:54:10.549859  502961 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:54:10.549906  502961 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:54:10.549926  502961 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:54:10.551014  502961 out.go:203] 

                                                
                                                
** /stderr **
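The failure recorded above is consistent from kubeadm onward: the control-plane manifests were written, the kubelet was started, and the health endpoint at http://127.0.0.1:10248/healthz never answered. A minimal way to confirm this by hand, assuming the no-preload-063943 container is still running, is to re-run from inside the node the same probes that kubeadm's own output suggests (a troubleshooting sketch, not part of the recorded run):

    # Probe the kubelet health endpoint that kubeadm polls (quoted in the log above)
    minikube ssh -p no-preload-063943 -- curl -sS http://127.0.0.1:10248/healthz
    # Check the kubelet unit and its recent journal, per the kubeadm suggestion
    minikube ssh -p no-preload-063943 -- sudo systemctl status kubelet --no-pager
    minikube ssh -p no-preload-063943 -- sudo journalctl -xeu kubelet --no-pager -n 100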
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1": exit status 109
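The suggestion emitted at the end of the log (--extra-config=kubelet.cgroup-driver=systemd) lines up with the docker info later in this report, which shows CgroupDriver:cgroupfs on a cgroups v1 host. A sketch of the suggested retry, reusing the flags from the failed invocation above; whether it clears this particular failure was not verified in this run:

    # Same invocation as the failed FirstStart, plus the cgroup-driver override
    # that minikube's own error output suggests
    out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 \
      --wait=true --preload=false --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd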
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-063943
helpers_test.go:244: (dbg) docker inspect no-preload-063943:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	        "Created": "2025-12-22T23:45:49.557145486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:45:49.595623184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hosts",
	        "LogPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc-json.log",
	        "Name": "/no-preload-063943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-063943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-063943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	                "LowerDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-063943",
	                "Source": "/var/lib/docker/volumes/no-preload-063943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-063943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-063943",
	                "name.minikube.sigs.k8s.io": "no-preload-063943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b12aa3b274c1526f59343d87f9f299a4f40a5ab395883334ecfec940090bf65a",
	            "SandboxKey": "/var/run/docker/netns/b12aa3b274c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-063943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fe1a4d651e77a6056be2344adfa00e0a1474c8d315239814c9f2b4594dd53fd",
	                    "EndpointID": "3b7f033df37f355a43561609b2804995167974287179a0903251f6f85150dc35",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:80:ed:cd:a5:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-063943",
	                        "786df4b77771"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
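Most of the inspect payload above is routine; the fields that matter for this failure can be pulled directly with a Go-template query (the field paths are exactly those shown in the JSON above):

    # State, memory limit (3221225472 bytes = the 3072 MiB requested), and
    # cgroup namespace mode of the node container
    docker inspect no-preload-063943 \
      --format 'status={{.State.Status}} mem={{.HostConfig.Memory}} cgroupns={{.HostConfig.CgroupnsMode}}'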
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 6 (299.205318ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 23:54:10.924177  598443 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
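The exit status 6 is the kubeconfig problem visible in both streams: stdout warns that kubectl points at a stale context, and stderr shows the no-preload-063943 endpoint missing from the kubeconfig. The fix the warning itself names, assuming the profile still exists:

    # Rewrite the kubeconfig entry for this profile, as the status output suggests
    out/minikube-linux-amd64 update-context -p no-preload-063943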
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-063943 logs -n 25
E1222 23:54:11.507496   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-003676 sudo systemctl status kubelet --all --full --no-pager                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat kubelet --no-pager                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo journalctl -xeu kubelet --all --full --no-pager                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/kubernetes/kubelet.conf                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /var/lib/kubelet/config.yaml                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status docker --all --full --no-pager                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat docker --no-pager                                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/docker/daemon.json                                                                                       │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo docker system info                                                                                                │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status cri-docker --all --full --no-pager                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat cri-docker --no-pager                                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                          │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                    │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cri-dockerd --version                                                                                             │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status containerd --all --full --no-pager                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat containerd --no-pager                                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /lib/systemd/system/containerd.service                                                                        │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/containerd/config.toml                                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo containerd config dump                                                                                            │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status crio --all --full --no-pager                                                                     │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │                     │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat crio --no-pager                                                                                     │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:54 UTC │
	│ ssh     │ -p kindnet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ ssh     │ -p kindnet-003676 sudo crio config                                                                                                       │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ delete  │ -p kindnet-003676                                                                                                                        │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ start   │ -p calico-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker │ calico-003676  │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:54:03
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:54:03.188210  596624 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:54:03.188498  596624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:54:03.188509  596624 out.go:374] Setting ErrFile to fd 2...
	I1222 23:54:03.188513  596624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:54:03.188776  596624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:54:03.189284  596624 out.go:368] Setting JSON to false
	I1222 23:54:03.190632  596624 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12983,"bootTime":1766434660,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:54:03.190715  596624 start.go:143] virtualization: kvm guest
	I1222 23:54:03.192553  596624 out.go:179] * [calico-003676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:54:03.193777  596624 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:54:03.193793  596624 notify.go:221] Checking for updates...
	I1222 23:54:03.196130  596624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:54:03.197099  596624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:54:03.198050  596624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:54:03.199098  596624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:54:03.200096  596624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:54:03.201403  596624 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201498  596624 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201564  596624 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201680  596624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:54:03.224281  596624 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:54:03.224368  596624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:54:03.285405  596624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:54:03.274504901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:54:03.285511  596624 docker.go:319] overlay module found
	I1222 23:54:03.287152  596624 out.go:179] * Using the docker driver based on user configuration
	I1222 23:54:03.288332  596624 start.go:309] selected driver: docker
	I1222 23:54:03.288365  596624 start.go:928] validating driver "docker" against <nil>
	I1222 23:54:03.288386  596624 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:54:03.289640  596624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:54:03.345546  596624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:54:03.335879526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:54:03.345788  596624 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 23:54:03.345984  596624 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:54:03.347515  596624 out.go:179] * Using Docker driver with root privileges
	I1222 23:54:03.348454  596624 cni.go:84] Creating CNI manager for "calico"
	I1222 23:54:03.348470  596624 start_flags.go:342] Found "Calico" CNI - setting NetworkPlugin=cni
	I1222 23:54:03.348524  596624 start.go:353] cluster config:
	{Name:calico-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:54:03.349641  596624 out.go:179] * Starting "calico-003676" primary control-plane node in "calico-003676" cluster
	I1222 23:54:03.350623  596624 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:54:03.351623  596624 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:54:03.352613  596624 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:54:03.352648  596624 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1222 23:54:03.352664  596624 cache.go:65] Caching tarball of preloaded images
	I1222 23:54:03.352694  596624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:54:03.352746  596624 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 23:54:03.352758  596624 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
	I1222 23:54:03.352883  596624 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/config.json ...
	I1222 23:54:03.352905  596624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/config.json: {Name:mk5ed9418edb4de606d096fb81b7cc611725239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:54:03.372552  596624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:54:03.372570  596624 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:54:03.372584  596624 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:54:03.372651  596624 start.go:360] acquireMachinesLock for calico-003676: {Name:mk3d3711ac04e83fbd9b0eaa9538d6de80a1d211 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:54:03.372748  596624 start.go:364] duration metric: took 75.94µs to acquireMachinesLock for "calico-003676"
	I1222 23:54:03.372770  596624 start.go:93] Provisioning new machine with config: &{Name:calico-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:54:03.372832  596624 start.go:125] createHost starting for "" (driver="docker")
	I1222 23:54:03.374984  596624 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:54:03.375157  596624 start.go:159] libmachine.API.Create for "calico-003676" (driver="docker")
	I1222 23:54:03.375183  596624 client.go:173] LocalClient.Create starting
	I1222 23:54:03.375301  596624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:54:03.375336  596624 main.go:144] libmachine: Decoding PEM data...
	I1222 23:54:03.375352  596624 main.go:144] libmachine: Parsing certificate...
	I1222 23:54:03.375401  596624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:54:03.375419  596624 main.go:144] libmachine: Decoding PEM data...
	I1222 23:54:03.375434  596624 main.go:144] libmachine: Parsing certificate...
	I1222 23:54:03.375786  596624 cli_runner.go:164] Run: docker network inspect calico-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:54:03.391406  596624 cli_runner.go:211] docker network inspect calico-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:54:03.391470  596624 network_create.go:284] running [docker network inspect calico-003676] to gather additional debugging logs...
	I1222 23:54:03.391494  596624 cli_runner.go:164] Run: docker network inspect calico-003676
	W1222 23:54:03.407573  596624 cli_runner.go:211] docker network inspect calico-003676 returned with exit code 1
	I1222 23:54:03.407617  596624 network_create.go:287] error running [docker network inspect calico-003676]: docker network inspect calico-003676: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-003676 not found
	I1222 23:54:03.407632  596624 network_create.go:289] output of [docker network inspect calico-003676]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-003676 not found
	
	** /stderr **
	I1222 23:54:03.407755  596624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:54:03.424283  596624 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:54:03.424826  596624 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:54:03.425396  596624 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:54:03.426007  596624 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5f6692e5184d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:79:d3:b1:de:45} reservation:<nil>}
	I1222 23:54:03.426798  596624 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f96810}
	I1222 23:54:03.426829  596624 network_create.go:124] attempt to create docker network calico-003676 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 23:54:03.426873  596624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-003676 calico-003676
	I1222 23:54:03.472775  596624 network_create.go:108] docker network calico-003676 192.168.85.0/24 created
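The scan above is minikube's free-subnet search: it walks candidate private /24 blocks in order, skips any whose bridge interface already exists, and creates a network on the first free one. A hand-run equivalent using only the standard docker CLI (the network name and subnet are the values from this run, so this is an illustrative sketch, not the tool's actual code path):

	# Subnets already claimed by docker bridge networks (the "taken" entries above)
	docker network ls --filter driver=bridge -q | xargs -r docker network inspect \
	  --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	# The first free candidate is then created, matching the log line above
	# (the real command also passes --ip-masq/--icc driver options and labels)
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 calico-003676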
	I1222 23:54:03.472821  596624 kic.go:121] calculated static IP "192.168.85.2" for the "calico-003676" container
	I1222 23:54:03.472900  596624 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:54:03.489480  596624 cli_runner.go:164] Run: docker volume create calico-003676 --label name.minikube.sigs.k8s.io=calico-003676 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:54:03.508864  596624 oci.go:103] Successfully created a docker volume calico-003676
	I1222 23:54:03.508964  596624 cli_runner.go:164] Run: docker run --rm --name calico-003676-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-003676 --entrypoint /usr/bin/test -v calico-003676:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:54:03.886644  596624 oci.go:107] Successfully prepared a docker volume calico-003676
	I1222 23:54:03.886731  596624 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:54:03.886747  596624 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 23:54:03.886827  596624 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 23:54:07.245968  596624 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (3.359084159s)
	I1222 23:54:07.246000  596624 kic.go:203] duration metric: took 3.359250344s to extract preloaded images to volume ...
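The preload step is what lets the node skip image pulls at cluster start: the lz4 tarball holding a prebuilt docker image store for v1.34.3 is untarred straight into the node's /var volume before the node container exists. To peek at what landed in the volume (volume and image names are the ones from this run; purely illustrative):

	# Mount the populated volume in a throwaway container and list the image store
	docker run --rm -v calico-003676:/var --entrypoint /bin/ls \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 \
	  /var/lib/docker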
	W1222 23:54:07.246121  596624 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:54:07.246213  596624 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:54:07.304342  596624 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-003676 --name calico-003676 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-003676 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-003676 --network calico-003676 --ip 192.168.85.2 --volume calico-003676:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:54:07.554410  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Running}}
	I1222 23:54:07.572329  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.590271  596624 cli_runner.go:164] Run: docker exec calico-003676 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:54:07.633613  596624 oci.go:144] the created container "calico-003676" has a running status.
	I1222 23:54:07.633644  596624 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa...
	I1222 23:54:07.693520  596624 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:54:07.720269  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.737996  596624 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:54:07.738023  596624 kic_runner.go:114] Args: [docker exec --privileged calico-003676 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:54:07.790368  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.809086  596624 machine.go:94] provisionDockerMachine start ...
	I1222 23:54:07.809217  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:07.828259  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:07.828636  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:07.828657  596624 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:54:07.829479  596624 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56752->127.0.0.1:33128: read: connection reset by peer
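Provisioning reaches the node over the SSH port docker published on loopback (22/tcp mapped to 127.0.0.1:33128 in this run, extracted by the inspect call above); a connection reset like this while sshd inside the container is still starting is typically transient and retried. The same dial by hand, with the key path, port, and user from this run (illustrative only):

	ssh -i /home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa \
	  -p 33128 docker@127.0.0.1 hostname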
	I1222 23:54:10.177108  502961 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000357414s
	I1222 23:54:10.177165  502961 kubeadm.go:319] 
	I1222 23:54:10.177310  502961 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:54:10.177524  502961 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:54:10.177810  502961 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:54:10.177828  502961 kubeadm.go:319] 
	I1222 23:54:10.178077  502961 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:54:10.178152  502961 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:54:10.178219  502961 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:54:10.178231  502961 kubeadm.go:319] 
	I1222 23:54:10.180123  502961 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:54:10.180949  502961 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:54:10.181073  502961 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:54:10.181313  502961 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 23:54:10.181322  502961 kubeadm.go:319] 
	I1222 23:54:10.181426  502961 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:54:10.181463  502961 kubeadm.go:403] duration metric: took 8m3.290305083s to StartCluster
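At this point kubeadm has given up waiting on the kubelet health endpoint and minikube falls back to collecting diagnostics (the crictl and journalctl runs that follow). The checks kubeadm itself recommends can be run inside the node; a sketch against this run's profile, assuming the standard minikube ssh entry point:

	out/minikube-linux-amd64 ssh -p no-preload-063943 -- sudo systemctl status kubelet --no-pager
	out/minikube-linux-amd64 ssh -p no-preload-063943 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	out/minikube-linux-amd64 ssh -p no-preload-063943 -- curl -sSL http://127.0.0.1:10248/healthz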
	I1222 23:54:10.181529  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:54:10.181635  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:54:10.218237  502961 cri.go:96] found id: ""
	I1222 23:54:10.218275  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.218287  502961 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:54:10.218296  502961 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:54:10.218354  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:54:10.243757  502961 cri.go:96] found id: ""
	I1222 23:54:10.243787  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.243799  502961 logs.go:284] No container was found matching "etcd"
	I1222 23:54:10.243808  502961 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:54:10.243868  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:54:10.273079  502961 cri.go:96] found id: ""
	I1222 23:54:10.273107  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.273120  502961 logs.go:284] No container was found matching "coredns"
	I1222 23:54:10.273129  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:54:10.273204  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:54:10.299807  502961 cri.go:96] found id: ""
	I1222 23:54:10.299834  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.299846  502961 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:54:10.299855  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:54:10.299907  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:54:10.324887  502961 cri.go:96] found id: ""
	I1222 23:54:10.324911  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.324919  502961 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:54:10.324926  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:54:10.324980  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:54:10.348741  502961 cri.go:96] found id: ""
	I1222 23:54:10.348766  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.348775  502961 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:54:10.348783  502961 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:54:10.348838  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:54:10.372916  502961 cri.go:96] found id: ""
	I1222 23:54:10.372942  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.372952  502961 logs.go:284] No container was found matching "kindnet"
	I1222 23:54:10.372966  502961 logs.go:123] Gathering logs for container status ...
	I1222 23:54:10.372982  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:54:10.399951  502961 logs.go:123] Gathering logs for kubelet ...
	I1222 23:54:10.399977  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:54:10.448425  502961 logs.go:123] Gathering logs for dmesg ...
	I1222 23:54:10.448459  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:54:10.468561  502961 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:54:10.468588  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:54:10.524190  502961 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:54:10.524222  502961 logs.go:123] Gathering logs for Docker ...
	I1222 23:54:10.524236  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1222 23:54:10.545693  502961 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:54:10.545755  502961 out.go:285] * 
	W1222 23:54:10.545816  502961 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:54:10.545830  502961 out.go:285] * 
	W1222 23:54:10.546079  502961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:54:10.548818  502961 out.go:203] 
	W1222 23:54:10.549859  502961 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:54:10.549906  502961 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:54:10.549926  502961 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:54:10.551014  502961 out.go:203] 
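The suggested flag, spelled out against this profile (the flag text is verbatim from the suggestion above; whether it clears this particular cgroup v1 validation failure is not shown in this log):

	out/minikube-linux-amd64 start -p no-preload-063943 \
	  --extra-config=kubelet.cgroup-driver=systemd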
	
	
	==> Docker <==
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.067952621Z" level=info msg="Restoring containers: start."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.087206382Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.105276522Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.608862562Z" level=info msg="Loading containers: done."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622047370Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622108955Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622142828Z" level=info msg="Initializing buildkit"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.654446856Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660274243Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348571Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660422079Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348899Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:54:11.454017    9741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:11.454640    9741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:11.456159    9741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:11.456572    9741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:11.458078    9741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e0 b3 e5 fd 05 08 06
	[Dec22 23:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 5e 7d 42 4c be 08 06
	[ +37.051500] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 9d 29 a8 c7 7e 08 06
	[  +0.046977] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 20 ef 34 9e ff 08 06
	[  +2.780094] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 36 71 18 35 80 08 06
	[  +0.005286] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e 85 6b 14 50 db 08 06
	[Dec22 23:49] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 3d 46 1b 4b 15 08 06
	[  +8.285809] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 de e5 d5 d2 d6 08 06
	[Dec22 23:50] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 9c 73 09 d8 3c 08 06
	[Dec22 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fe dd 45 92 98 69 08 06
	[  +0.005109] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	[Dec22 23:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 26 d0 5e 2a 12 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	
	
	==> kernel <==
	 23:54:11 up  3:36,  0 user,  load average: 0.75, 1.66, 1.65
	Linux no-preload-063943 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:54:08 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:08 no-preload-063943 kubelet[9454]: E1222 23:54:08.537794    9454 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:08 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:08 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:09 no-preload-063943 kubelet[9466]: E1222 23:54:09.287083    9466 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:09 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:09 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:10 no-preload-063943 kubelet[9477]: E1222 23:54:10.041515    9477 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 23:54:10 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:10 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:10 no-preload-063943 kubelet[9613]: E1222 23:54:10.784635    9613 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 6 (308.227073ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 23:54:11.827635  598925 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (503.18s)

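Note: the kubelet journal above shows the actual failure mode for this run. kubelet v1.35.0-rc.1 refuses to start on a host that is still on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so systemd restart-loops it (restart counter 318 through 321) and the apiserver on localhost:8443 never answers, which is what the earlier kubectl connection-refused errors and the "Stopped" status reflect. A quick, generic way to confirm which cgroup hierarchy a Linux host is running (shown here only for diagnosis, not part of the test suite):

	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" indicates the legacy v1 layout this kubelet rejects
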
TestStartStop/group/newest-cni/serial/FirstStart (498.37s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-348344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p newest-cni-348344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m16.774212423s)

-- stdout --
	* [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16

-- /stdout --
** stderr ** 
	I1222 23:50:41.056123  562224 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:50:41.056243  562224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:50:41.056255  562224 out.go:374] Setting ErrFile to fd 2...
	I1222 23:50:41.056262  562224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:50:41.056439  562224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:50:41.056922  562224 out.go:368] Setting JSON to false
	I1222 23:50:41.058192  562224 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12781,"bootTime":1766434660,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:50:41.058249  562224 start.go:143] virtualization: kvm guest
	I1222 23:50:41.060198  562224 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:50:41.061464  562224 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:50:41.061451  562224 notify.go:221] Checking for updates...
	I1222 23:50:41.062788  562224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:50:41.063925  562224 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:50:41.065049  562224 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:50:41.066076  562224 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:50:41.067226  562224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:50:41.069128  562224 config.go:182] Loaded profile config "embed-certs-142613": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:50:41.069287  562224 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:50:41.069430  562224 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:50:41.069557  562224 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:50:41.095356  562224 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:50:41.095473  562224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:50:41.156076  562224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:50:41.146470655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:50:41.156186  562224 docker.go:319] overlay module found
	I1222 23:50:41.157911  562224 out.go:179] * Using the docker driver based on user configuration
	I1222 23:50:41.158980  562224 start.go:309] selected driver: docker
	I1222 23:50:41.158997  562224 start.go:928] validating driver "docker" against <nil>
	I1222 23:50:41.159012  562224 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:50:41.160161  562224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:50:41.213750  562224 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:50:41.203230014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:50:41.213911  562224 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W1222 23:50:41.213962  562224 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1222 23:50:41.214195  562224 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1222 23:50:41.216429  562224 out.go:179] * Using Docker driver with root privileges
	I1222 23:50:41.217496  562224 cni.go:84] Creating CNI manager for ""
	I1222 23:50:41.217563  562224 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:50:41.217573  562224 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 23:50:41.217664  562224 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:50:41.218949  562224 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1222 23:50:41.220023  562224 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:50:41.221389  562224 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:50:41.222451  562224 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:50:41.222482  562224 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 23:50:41.222494  562224 cache.go:65] Caching tarball of preloaded images
	I1222 23:50:41.222548  562224 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:50:41.222568  562224 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 23:50:41.222575  562224 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1222 23:50:41.222707  562224 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1222 23:50:41.222734  562224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json: {Name:mk51aea80005abf11df60c3fb4ab58fce04a5089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:41.243248  562224 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:50:41.243271  562224 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:50:41.243292  562224 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:50:41.243333  562224 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:50:41.243457  562224 start.go:364] duration metric: took 99.409µs to acquireMachinesLock for "newest-cni-348344"
	I1222 23:50:41.243487  562224 start.go:93] Provisioning new machine with config: &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:50:41.243566  562224 start.go:125] createHost starting for "" (driver="docker")
	I1222 23:50:41.245618  562224 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:50:41.245813  562224 start.go:159] libmachine.API.Create for "newest-cni-348344" (driver="docker")
	I1222 23:50:41.245841  562224 client.go:173] LocalClient.Create starting
	I1222 23:50:41.245923  562224 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:50:41.245951  562224 main.go:144] libmachine: Decoding PEM data...
	I1222 23:50:41.245967  562224 main.go:144] libmachine: Parsing certificate...
	I1222 23:50:41.246035  562224 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:50:41.246058  562224 main.go:144] libmachine: Decoding PEM data...
	I1222 23:50:41.246073  562224 main.go:144] libmachine: Parsing certificate...
	I1222 23:50:41.246372  562224 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:50:41.265367  562224 cli_runner.go:211] docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:50:41.265467  562224 network_create.go:284] running [docker network inspect newest-cni-348344] to gather additional debugging logs...
	I1222 23:50:41.265494  562224 cli_runner.go:164] Run: docker network inspect newest-cni-348344
	W1222 23:50:41.284742  562224 cli_runner.go:211] docker network inspect newest-cni-348344 returned with exit code 1
	I1222 23:50:41.284779  562224 network_create.go:287] error running [docker network inspect newest-cni-348344]: docker network inspect newest-cni-348344: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-348344 not found
	I1222 23:50:41.284819  562224 network_create.go:289] output of [docker network inspect newest-cni-348344]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-348344 not found
	
	** /stderr **
	I1222 23:50:41.284937  562224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:50:41.304876  562224 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:50:41.305703  562224 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:50:41.306453  562224 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:50:41.307257  562224 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5f6692e5184d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:79:d3:b1:de:45} reservation:<nil>}
	I1222 23:50:41.307769  562224 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-937b6c2685c7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:88:b1:79:83:24} reservation:<nil>}
	I1222 23:50:41.308556  562224 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f13fc0}
	I1222 23:50:41.308601  562224 network_create.go:124] attempt to create docker network newest-cni-348344 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1222 23:50:41.308676  562224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-348344 newest-cni-348344
	I1222 23:50:41.356700  562224 network_create.go:108] docker network newest-cni-348344 192.168.94.0/24 created
	I1222 23:50:41.356734  562224 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-348344" container
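	# Aside: minikube labels every network it creates (created_by.minikube.sigs.k8s.io=true,
	# as in the `docker network create` call above), so networks left behind by these tests
	# can be listed with a standard filter. Illustrative only, not part of the test flow:
	#   docker network ls --filter label=created_by.minikube.sigs.k8s.io=true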
	I1222 23:50:41.356831  562224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:50:41.373553  562224 cli_runner.go:164] Run: docker volume create newest-cni-348344 --label name.minikube.sigs.k8s.io=newest-cni-348344 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:50:41.391109  562224 oci.go:103] Successfully created a docker volume newest-cni-348344
	I1222 23:50:41.391202  562224 cli_runner.go:164] Run: docker run --rm --name newest-cni-348344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-348344 --entrypoint /usr/bin/test -v newest-cni-348344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:50:41.787660  562224 oci.go:107] Successfully prepared a docker volume newest-cni-348344
	I1222 23:50:41.787727  562224 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:50:41.787741  562224 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 23:50:41.787807  562224 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-348344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 23:50:45.053793  562224 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-348344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (3.265944487s)
	I1222 23:50:45.053835  562224 kic.go:203] duration metric: took 3.266090605s to extract preloaded images to volume ...
	W1222 23:50:45.053968  562224 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:50:45.054136  562224 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:50:45.109422  562224 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-348344 --name newest-cni-348344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-348344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-348344 --network newest-cni-348344 --ip 192.168.94.2 --volume newest-cni-348344:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:50:45.374286  562224 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Running}}
	I1222 23:50:45.392440  562224 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1222 23:50:45.411438  562224 cli_runner.go:164] Run: docker exec newest-cni-348344 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:50:45.460231  562224 oci.go:144] the created container "newest-cni-348344" has a running status.
	I1222 23:50:45.460283  562224 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa...
	I1222 23:50:45.645793  562224 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:50:45.678285  562224 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1222 23:50:45.696497  562224 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:50:45.696516  562224 kic_runner.go:114] Args: [docker exec --privileged newest-cni-348344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:50:45.734845  562224 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1222 23:50:45.759426  562224 machine.go:94] provisionDockerMachine start ...
	I1222 23:50:45.759538  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:45.788911  562224 main.go:144] libmachine: Using SSH client type: native
	I1222 23:50:45.789268  562224 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1222 23:50:45.789292  562224 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:50:45.790799  562224 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59706->127.0.0.1:33113: read: connection reset by peer
	I1222 23:50:48.935487  562224 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1222 23:50:48.935522  562224 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1222 23:50:48.935588  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:48.954184  562224 main.go:144] libmachine: Using SSH client type: native
	I1222 23:50:48.954487  562224 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1222 23:50:48.954506  562224 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1222 23:50:49.106242  562224 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1222 23:50:49.106322  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:49.124027  562224 main.go:144] libmachine: Using SSH client type: native
	I1222 23:50:49.124248  562224 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1222 23:50:49.124265  562224 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:50:49.266107  562224 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:50:49.266139  562224 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:50:49.266170  562224 ubuntu.go:190] setting up certificates
	I1222 23:50:49.266178  562224 provision.go:84] configureAuth start
	I1222 23:50:49.266232  562224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1222 23:50:49.283483  562224 provision.go:143] copyHostCerts
	I1222 23:50:49.283562  562224 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:50:49.283580  562224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:50:49.283679  562224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:50:49.283784  562224 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:50:49.283794  562224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:50:49.283822  562224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:50:49.283890  562224 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:50:49.283897  562224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:50:49.283923  562224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:50:49.283984  562224 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
	I1222 23:50:49.352460  562224 provision.go:177] copyRemoteCerts
	I1222 23:50:49.352529  562224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:50:49.352570  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:49.370457  562224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1222 23:50:49.470508  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 23:50:49.489085  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:50:49.506367  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:50:49.527184  562224 provision.go:87] duration metric: took 260.988374ms to configureAuth
	I1222 23:50:49.527219  562224 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:50:49.527442  562224 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:50:49.527499  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:49.547262  562224 main.go:144] libmachine: Using SSH client type: native
	I1222 23:50:49.547486  562224 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1222 23:50:49.547499  562224 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:50:49.689030  562224 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:50:49.689055  562224 ubuntu.go:71] root file system type: overlay
	I1222 23:50:49.689217  562224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:50:49.689305  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:49.707859  562224 main.go:144] libmachine: Using SSH client type: native
	I1222 23:50:49.708114  562224 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1222 23:50:49.708221  562224 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:50:49.860808  562224 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:50:49.860916  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:49.878135  562224 main.go:144] libmachine: Using SSH client type: native
	I1222 23:50:49.878379  562224 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1222 23:50:49.878414  562224 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:50:50.997486  562224 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-22 23:50:49.858895479 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1222 23:50:50.997519  562224 machine.go:97] duration metric: took 5.23806737s to provisionDockerMachine
	I1222 23:50:50.997536  562224 client.go:176] duration metric: took 9.751686268s to LocalClient.Create
	I1222 23:50:50.997562  562224 start.go:167] duration metric: took 9.751749191s to libmachine.API.Create "newest-cni-348344"
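	# Aside: the unit diff above is the standard systemd override pattern. ExecStart= is a
	# list-type setting that accumulates across unit files, and a non-oneshot service may
	# carry only one command, so the override first writes an empty ExecStart= to clear the
	# inherited command before supplying its own. A minimal drop-in using the same pattern
	# (hypothetical path and flags, for illustration only):
	#   /etc/systemd/system/docker.service.d/10-override.conf:
	#     [Service]
	#     ExecStart=
	#     ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	#   then: sudo systemctl daemon-reload && sudo systemctl restart docker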
	I1222 23:50:50.997574  562224 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1222 23:50:50.997604  562224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:50:50.997672  562224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:50:50.997733  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:51.020082  562224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1222 23:50:51.129005  562224 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:50:51.132440  562224 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:50:51.132465  562224 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:50:51.132476  562224 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:50:51.132530  562224 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:50:51.132640  562224 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:50:51.132746  562224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:50:51.140236  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:50:51.159683  562224 start.go:296] duration metric: took 162.092934ms for postStartSetup
	I1222 23:50:51.160014  562224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1222 23:50:51.178856  562224 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1222 23:50:51.179088  562224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:50:51.179130  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:51.195215  562224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1222 23:50:51.292609  562224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:50:51.297275  562224 start.go:128] duration metric: took 10.053691003s to createHost
	I1222 23:50:51.297300  562224 start.go:83] releasing machines lock for "newest-cni-348344", held for 10.053828991s
	I1222 23:50:51.297373  562224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1222 23:50:51.315763  562224 ssh_runner.go:195] Run: cat /version.json
	I1222 23:50:51.315820  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:51.315863  562224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:50:51.315952  562224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1222 23:50:51.332845  562224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1222 23:50:51.334910  562224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1222 23:50:51.430274  562224 ssh_runner.go:195] Run: systemctl --version
	I1222 23:50:51.485454  562224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:50:51.490238  562224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:50:51.490294  562224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:50:51.514221  562224 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1222 23:50:51.514249  562224 start.go:496] detecting cgroup driver to use...
	I1222 23:50:51.514284  562224 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:50:51.514402  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:50:51.528278  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:50:51.538007  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:50:51.546368  562224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:50:51.546415  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:50:51.554518  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:50:51.562744  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:50:51.570630  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:50:51.579094  562224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:50:51.586673  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:50:51.594998  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:50:51.603097  562224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 23:50:51.611529  562224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:50:51.618362  562224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:50:51.625195  562224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:50:51.706674  562224 ssh_runner.go:195] Run: sudo systemctl restart containerd
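A minimal sketch (not part of the captured run) of double-checking the containerd settings the sed edits above leave behind; the file path and expected values are taken from the commands in this log, everything else is an assumption:

	# verify the pause image and cgroup-driver lines after the edits
	grep -E '^ *(sandbox_image|SystemdCgroup) *=' /etc/containerd/config.toml
	# expected, per the sed expressions above:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = false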
	I1222 23:50:51.787162  562224 start.go:496] detecting cgroup driver to use...
	I1222 23:50:51.787215  562224 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:50:51.787265  562224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:50:51.801449  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:50:51.813347  562224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:50:51.828971  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:50:51.840251  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:50:51.851694  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:50:51.864859  562224 ssh_runner.go:195] Run: which cri-dockerd
	I1222 23:50:51.868234  562224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:50:51.876672  562224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:50:51.888727  562224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:50:51.970471  562224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:50:52.063410  562224 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:50:52.063533  562224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:50:52.077746  562224 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:50:52.089362  562224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:50:52.168454  562224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:50:52.887697  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:50:52.900106  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:50:52.913134  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:50:52.924969  562224 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:50:53.011700  562224 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:50:53.097335  562224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:50:53.179384  562224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:50:53.204553  562224 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:50:53.216253  562224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:50:53.309208  562224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:50:53.383876  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:50:53.396620  562224 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:50:53.396681  562224 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 23:50:53.400390  562224 start.go:564] Will wait 60s for crictl version
	I1222 23:50:53.400452  562224 ssh_runner.go:195] Run: which crictl
	I1222 23:50:53.403918  562224 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:50:53.428016  562224 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
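The `crictl version` call above succeeds because the crictl.yaml written moments earlier points at the cri-dockerd socket. A hedged sketch of the equivalent explicit invocation (socket path as written in this log; nothing beyond that verified here):

	# explicit form; /etc/crictl.yaml supplies this endpoint by default
	sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version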
	I1222 23:50:53.428080  562224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:50:53.452347  562224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:50:53.478435  562224 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 23:50:53.478519  562224 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:50:53.495658  562224 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1222 23:50:53.499766  562224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:50:53.512103  562224 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1222 23:50:53.513036  562224 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:50:53.513186  562224 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:50:53.513235  562224 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:50:53.534556  562224 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:50:53.534574  562224 docker.go:624] Images already preloaded, skipping extraction
	I1222 23:50:53.534640  562224 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:50:53.554147  562224 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:50:53.554181  562224 cache_images.go:86] Images are preloaded, skipping loading
	I1222 23:50:53.554193  562224 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1222 23:50:53.554316  562224 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
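A sketch, not from the run, of inspecting the unit text above once it lands on the node (the drop-in and unit paths are the ones the scp lines below use; assuming systemctl is available inside the node container):

	# shows /lib/systemd/system/kubelet.service plus the
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in with the ExecStart above
	sudo systemctl cat kubelet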
	I1222 23:50:53.554381  562224 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 23:50:53.604380  562224 cni.go:84] Creating CNI manager for ""
	I1222 23:50:53.604406  562224 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:50:53.604435  562224 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1222 23:50:53.604463  562224 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:50:53.604616  562224 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
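A sketch, not from the run, of how a generated config like the one above can be validated before it is applied; the kubeadm binary path and config location are the ones this log uses later, and --dry-run is a standard kubeadm flag, but treat the exact invocation as an assumption:

	# validate the generated config without mutating the node
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run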
	
	I1222 23:50:53.604691  562224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 23:50:53.612556  562224 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 23:50:53.612634  562224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:50:53.620050  562224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1222 23:50:53.632125  562224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 23:50:53.643993  562224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1222 23:50:53.655797  562224 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:50:53.659212  562224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:50:53.668668  562224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:50:53.751464  562224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:50:53.773933  562224 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1222 23:50:53.773955  562224 certs.go:195] generating shared ca certs ...
	I1222 23:50:53.773974  562224 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:53.774129  562224 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:50:53.774167  562224 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:50:53.774177  562224 certs.go:257] generating profile certs ...
	I1222 23:50:53.774230  562224 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1222 23:50:53.774250  562224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.crt with IP's: []
	I1222 23:50:53.956577  562224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.crt ...
	I1222 23:50:53.956612  562224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.crt: {Name:mk4835605b06c6554a39b180edd9908c8f2f19d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:53.956772  562224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key ...
	I1222 23:50:53.956785  562224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key: {Name:mk0b9b85c7f285411dd8a56fb7e6153c90cfbf16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:53.956866  562224 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1222 23:50:53.956886  562224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt.3654ac73 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1222 23:50:53.976995  562224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt.3654ac73 ...
	I1222 23:50:53.977017  562224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt.3654ac73: {Name:mk42f8c0cfdba97b6df4279ddc48a8d5bdc1bfd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:53.977139  562224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73 ...
	I1222 23:50:53.977150  562224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73: {Name:mk6daf0b8fa9990e76f8d0744e2a9c6339ee03e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:53.977219  562224 certs.go:382] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt.3654ac73 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt
	I1222 23:50:53.977292  562224 certs.go:386] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key
	I1222 23:50:53.977346  562224 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1222 23:50:53.977360  562224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt with IP's: []
	I1222 23:50:54.025683  562224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt ...
	I1222 23:50:54.025717  562224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt: {Name:mk323d55679d70387579e022dd7dcbd3cb1e1891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:54.025910  562224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key ...
	I1222 23:50:54.025930  562224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key: {Name:mkca6976c30d7744a7f01c0d81b598d5d15c18ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:50:54.026135  562224 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:50:54.026180  562224 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:50:54.026192  562224 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:50:54.026234  562224 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:50:54.026262  562224 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:50:54.026289  562224 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:50:54.026343  562224 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:50:54.027031  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:50:54.047063  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:50:54.064837  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:50:54.082398  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:50:54.099138  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 23:50:54.115701  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 23:50:54.132218  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:50:54.148837  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 23:50:54.165127  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:50:54.184125  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:50:54.200654  562224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:50:54.216922  562224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:50:54.228547  562224 ssh_runner.go:195] Run: openssl version
	I1222 23:50:54.234386  562224 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:50:54.241128  562224 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:50:54.248017  562224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:50:54.251393  562224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:50:54.251456  562224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:50:54.286115  562224 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 23:50:54.293732  562224 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/758032.pem /etc/ssl/certs/3ec20f2e.0
	I1222 23:50:54.301015  562224 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:50:54.307875  562224 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:50:54.314852  562224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:50:54.318336  562224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:50:54.318376  562224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:50:54.354209  562224 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:50:54.362074  562224 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 23:50:54.369000  562224 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:50:54.375859  562224 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:50:54.382826  562224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:50:54.386475  562224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:50:54.386530  562224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:50:54.420279  562224 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 23:50:54.427197  562224 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75803.pem /etc/ssl/certs/51391683.0
	I1222 23:50:54.433923  562224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:50:54.437345  562224 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 23:50:54.437402  562224 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:50:54.437515  562224 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:50:54.457582  562224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:50:54.465506  562224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:50:54.473256  562224 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:50:54.473298  562224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:50:54.480481  562224 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:50:54.480494  562224 kubeadm.go:158] found existing configuration files:
	
	I1222 23:50:54.480534  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:50:54.487698  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:50:54.487736  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:50:54.494726  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:50:54.502000  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:50:54.502045  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:50:54.509213  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:50:54.517736  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:50:54.517785  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:50:54.525852  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:50:54.533906  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:50:54.533957  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:50:54.541086  562224 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:50:54.576983  562224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:50:54.577078  562224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:50:54.642688  562224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:50:54.642780  562224 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:50:54.642827  562224 kubeadm.go:319] OS: Linux
	I1222 23:50:54.642891  562224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:50:54.642940  562224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:50:54.643001  562224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:50:54.643046  562224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:50:54.643087  562224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:50:54.643166  562224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:50:54.643241  562224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:50:54.643309  562224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:50:54.643389  562224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:50:54.698919  562224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:50:54.699078  562224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:50:54.699242  562224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:50:54.711763  562224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:50:54.715505  562224 out.go:252]   - Generating certificates and keys ...
	I1222 23:50:54.715639  562224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:50:54.715750  562224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:50:54.774369  562224 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 23:50:54.817835  562224 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 23:50:54.976570  562224 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 23:50:55.123352  562224 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 23:50:55.154478  562224 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 23:50:55.154643  562224 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-348344] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1222 23:50:55.258080  562224 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 23:50:55.258255  562224 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-348344] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1222 23:50:55.345640  562224 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 23:50:55.370252  562224 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 23:50:55.442169  562224 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 23:50:55.442252  562224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:50:55.602217  562224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:50:55.652356  562224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:50:55.715936  562224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:50:55.792965  562224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:50:55.847114  562224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:50:55.847722  562224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:50:55.851406  562224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:50:55.852803  562224 out.go:252]   - Booting up control plane ...
	I1222 23:50:55.852882  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:50:55.852959  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:50:55.854688  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:50:55.868952  562224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:50:55.869090  562224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:50:55.875580  562224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:50:55.875940  562224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:50:55.875981  562224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:50:55.979738  562224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:50:55.979924  562224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:54:55.980946  562224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000743938s
	I1222 23:54:55.980990  562224 kubeadm.go:319] 
	I1222 23:54:55.981208  562224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:54:55.981310  562224 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:54:55.981530  562224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:54:55.981553  562224 kubeadm.go:319] 
	I1222 23:54:55.981865  562224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:54:55.981959  562224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:54:55.982030  562224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:54:55.982046  562224 kubeadm.go:319] 
	I1222 23:54:55.986168  562224 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:54:55.987029  562224 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:54:55.987194  562224 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:54:55.987572  562224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 23:54:55.987616  562224 kubeadm.go:319] 
	I1222 23:54:55.987701  562224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1222 23:54:55.987869  562224 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-348344] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-348344] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000743938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
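The cgroups v1 warning above names the relevant knob directly. A hedged sketch of the KubeletConfiguration stanza it describes (field name per upstream kubelet config v1beta1; an assumption for illustration, not something this run applied):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# per the warning: needed to keep kubelet v1.35+ on a cgroups v1 host,
	# together with explicitly skipping the SystemVerification preflight check
	failCgroupV1: false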
	
	I1222 23:54:55.987968  562224 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1222 23:54:56.414677  562224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:54:56.427588  562224 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:54:56.427673  562224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:54:56.435453  562224 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:54:56.435472  562224 kubeadm.go:158] found existing configuration files:
	
	I1222 23:54:56.435519  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:54:56.442918  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:54:56.442964  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:54:56.450121  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:54:56.457496  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:54:56.457546  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:54:56.464623  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:54:56.471836  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:54:56.471894  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:54:56.479007  562224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:54:56.486295  562224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:54:56.486343  562224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:54:56.493909  562224 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:54:56.606004  562224 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:54:56.606616  562224 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:54:56.665845  562224 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:58:57.312274  562224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:58:57.312332  562224 kubeadm.go:319] 
	I1222 23:58:57.312497  562224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:58:57.315279  562224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:58:57.315347  562224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:58:57.315483  562224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:58:57.315622  562224 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:58:57.315693  562224 kubeadm.go:319] OS: Linux
	I1222 23:58:57.315763  562224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:58:57.315853  562224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:58:57.315915  562224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:58:57.315958  562224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:58:57.315999  562224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:58:57.316042  562224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:58:57.316085  562224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:58:57.316126  562224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:58:57.316170  562224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:58:57.316231  562224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:58:57.316326  562224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:58:57.316449  562224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:58:57.316537  562224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:58:57.318228  562224 out.go:252]   - Generating certificates and keys ...
	I1222 23:58:57.318310  562224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:58:57.318384  562224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:58:57.318477  562224 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:58:57.318563  562224 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:58:57.318690  562224 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:58:57.318778  562224 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:58:57.318890  562224 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:58:57.318987  562224 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:58:57.319107  562224 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:58:57.319222  562224 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:58:57.319284  562224 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:58:57.319374  562224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:58:57.319447  562224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:58:57.319523  562224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:58:57.319627  562224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:58:57.319729  562224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:58:57.319812  562224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:58:57.319926  562224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:58:57.320025  562224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:58:57.321146  562224 out.go:252]   - Booting up control plane ...
	I1222 23:58:57.321235  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:58:57.321316  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:58:57.321399  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:58:57.321509  562224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:58:57.321664  562224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:58:57.321790  562224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:58:57.321902  562224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:58:57.321941  562224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:58:57.322085  562224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:58:57.322210  562224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:58:57.322311  562224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000754156s
	I1222 23:58:57.322328  562224 kubeadm.go:319] 
	I1222 23:58:57.322413  562224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:58:57.322447  562224 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:58:57.322535  562224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:58:57.322541  562224 kubeadm.go:319] 
	I1222 23:58:57.322698  562224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:58:57.322748  562224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:58:57.322800  562224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:58:57.322836  562224 kubeadm.go:319] 
	I1222 23:58:57.322879  562224 kubeadm.go:403] duration metric: took 8m2.88547875s to StartCluster
	I1222 23:58:57.322927  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:58:57.322990  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:58:57.367838  562224 cri.go:96] found id: ""
	I1222 23:58:57.367870  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.367878  562224 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:58:57.367885  562224 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:58:57.367931  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:58:57.397908  562224 cri.go:96] found id: ""
	I1222 23:58:57.397941  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.397950  562224 logs.go:284] No container was found matching "etcd"
	I1222 23:58:57.397959  562224 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:58:57.398020  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:58:57.423495  562224 cri.go:96] found id: ""
	I1222 23:58:57.423516  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.423525  562224 logs.go:284] No container was found matching "coredns"
	I1222 23:58:57.423531  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:58:57.423588  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:58:57.454115  562224 cri.go:96] found id: ""
	I1222 23:58:57.454141  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.454152  562224 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:58:57.454161  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:58:57.454218  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:58:57.487745  562224 cri.go:96] found id: ""
	I1222 23:58:57.487776  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.487787  562224 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:58:57.487796  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:58:57.487865  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:58:57.515544  562224 cri.go:96] found id: ""
	I1222 23:58:57.515566  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.515573  562224 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:58:57.515580  562224 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:58:57.515649  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:58:57.547873  562224 cri.go:96] found id: ""
	I1222 23:58:57.547901  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.547912  562224 logs.go:284] No container was found matching "kindnet"
	I1222 23:58:57.547923  562224 logs.go:123] Gathering logs for kubelet ...
	I1222 23:58:57.547937  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:58:57.606575  562224 logs.go:123] Gathering logs for dmesg ...
	I1222 23:58:57.606613  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:58:57.626480  562224 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:58:57.626506  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:58:57.702799  562224 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:58:57.694188    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.695818    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.696498    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.698052    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.698488    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1222 23:58:57.702818  562224 logs.go:123] Gathering logs for Docker ...
	I1222 23:58:57.702831  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:58:57.723963  562224 logs.go:123] Gathering logs for container status ...
	I1222 23:58:57.723991  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 23:58:57.758791  562224 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000754156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:58:57.758859  562224 out.go:285] * 
	W1222 23:58:57.758931  562224 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000754156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:58:57.758949  562224 out.go:285] * 
	W1222 23:58:57.759255  562224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:58:57.763690  562224 out.go:203] 
	W1222 23:58:57.765170  562224 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000754156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:58:57.765230  562224 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:58:57.765264  562224 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:58:57.767215  562224 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p newest-cni-348344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1": exit status 109
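Mechanically, the wait-control-plane failure above is a timed-out poll of the kubelet's local health endpoint. Below is a minimal sketch of an equivalent probe in Go, assuming it runs on the node itself; the endpoint, port, and 4m0s deadline are taken from the log above, not read from any kubeadm configuration:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Poll the kubelet healthz endpoint the way kubeadm's
    	// wait-control-plane phase does, until it answers 200 OK
    	// or the 4m0s deadline seen in the log expires.
    	client := &http.Client{Timeout: 5 * time.Second}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("kubelet is healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("kubelet did not become healthy within 4m0s")
    }

In this run the probe would fail throughout, which is consistent with 'journalctl -xeu kubelet' being the suggested next step: the kubelet process itself never came up healthy, so nothing answers on 10248.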
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-348344
helpers_test.go:244: (dbg) docker inspect newest-cni-348344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	        "Created": "2025-12-22T23:50:45.124975619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 562959,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:50:45.155175204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hostname",
	        "HostsPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hosts",
	        "LogPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b-json.log",
	        "Name": "/newest-cni-348344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-348344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-348344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	                "LowerDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-348344",
	                "Source": "/var/lib/docker/volumes/newest-cni-348344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-348344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-348344",
	                "name.minikube.sigs.k8s.io": "newest-cni-348344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c751392a43e7100580d0d30ecd5964e4f8e21563f623abfc3d9bd467dea01c55",
	            "SandboxKey": "/var/run/docker/netns/c751392a43e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-348344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1020bfe2df349af00e9e2f4197eff27d709a25503c20a26c662019055cba21bb",
	                    "EndpointID": "a82a4e068e03b8467a5f670e0962c28ad41d4e0a0ed6287f98c7bde60083d8db",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1a:55:fe:37:1b:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-348344",
	                        "133dc19d84d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
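Rather than scanning the full JSON dump, the few fields the post-mortem actually relies on (container state and the host port mapped to the apiserver's 8443/tcp) can be pulled with docker inspect's -f template flag. A hedged sketch, using the profile name from this run; for the output above it prints "running 33116":

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Extract the container state and the host port bound to
    	// 8443/tcp via docker inspect's Go-template formatter.
    	format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "inspect", "-f", format, "newest-cni-348344").CombinedOutput()
    	if err != nil {
    		fmt.Printf("docker inspect failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("%s", out)
    }

Note the container is still "running": the node came up, but kubeadm init inside it failed, so the failure is at the kubelet layer rather than the Docker layer.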
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344: exit status 6 (344.675526ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 23:58:58.179405  665154 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
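The status.go:458 error above is kubeconfig bookkeeping rather than a second failure: because the start aborted, the profile was never written as a context into /home/jenkins/minikube-integration/22301-72233/kubeconfig. A sketch of the equivalent check, assuming k8s.io/client-go is available as a module and that KUBECONFIG points at the run's kubeconfig:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the kubeconfig and check whether the profile has a
    	// context entry -- the same condition behind the
    	// "does not appear in .../kubeconfig" error above.
    	path := os.Getenv("KUBECONFIG")
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		fmt.Println("load kubeconfig:", err)
    		return
    	}
    	if _, ok := cfg.Contexts["newest-cni-348344"]; !ok {
    		fmt.Printf("%q does not appear in %s\n", "newest-cni-348344", path)
    	}
    }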
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                    ARGS                                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-003676 sudo cat /lib/systemd/system/containerd.service                                                               │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p enable-default-cni-003676 sudo cat /etc/containerd/config.toml                                                                          │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p enable-default-cni-003676 sudo containerd config dump                                                                                   │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo systemctl cat cri-docker --no-pager                                                                                   │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p enable-default-cni-003676 sudo systemctl status crio --all --full --no-pager                                                            │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │                     │
	│ ssh     │ -p false-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p enable-default-cni-003676 sudo systemctl cat crio --no-pager                                                                            │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │                     │
	│ ssh     │ -p false-003676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p enable-default-cni-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                  │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo cri-dockerd --version                                                                                                 │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p enable-default-cni-003676 sudo crio config                                                                                              │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo systemctl status containerd --all --full --no-pager                                                                   │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo systemctl cat containerd --no-pager                                                                                   │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ delete  │ -p enable-default-cni-003676                                                                                                               │ enable-default-cni-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo cat /lib/systemd/system/containerd.service                                                                            │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo cat /etc/containerd/config.toml                                                                                       │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo containerd config dump                                                                                                │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo systemctl status crio --all --full --no-pager                                                                         │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │                     │
	│ ssh     │ -p false-003676 sudo systemctl cat crio --no-pager                                                                                         │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ ssh     │ -p false-003676 sudo crio config                                                                                                           │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ start   │ -p flannel-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker │ flannel-003676            │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │                     │
	│ delete  │ -p false-003676                                                                                                                            │ false-003676              │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │ 22 Dec 25 23:58 UTC │
	│ start   │ -p bridge-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker   │ bridge-003676             │ jenkins │ v1.37.0 │ 22 Dec 25 23:58 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:58:40
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:58:40.844061  659414 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:58:40.844338  659414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:58:40.844362  659414 out.go:374] Setting ErrFile to fd 2...
	I1222 23:58:40.844370  659414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:58:40.844662  659414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:58:40.845188  659414 out.go:368] Setting JSON to false
	I1222 23:58:40.846309  659414 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13261,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:58:40.846405  659414 start.go:143] virtualization: kvm guest
	I1222 23:58:40.847428  659414 out.go:179] * [bridge-003676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:58:40.848860  659414 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:58:40.848855  659414 notify.go:221] Checking for updates...
	I1222 23:58:40.850187  659414 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:58:40.851287  659414 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:58:40.852604  659414 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:58:40.854671  659414 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:58:40.858339  659414 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:58:40.861044  659414 config.go:182] Loaded profile config "flannel-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:58:40.861150  659414 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:58:40.861239  659414 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:58:40.861330  659414 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:58:40.887290  659414 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:58:40.887376  659414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:58:40.946796  659414 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-22 23:58:40.935786176 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:58:40.946912  659414 docker.go:319] overlay module found
	I1222 23:58:40.950739  659414 out.go:179] * Using the docker driver based on user configuration
	I1222 23:58:40.951926  659414 start.go:309] selected driver: docker
	I1222 23:58:40.951946  659414 start.go:928] validating driver "docker" against <nil>
	I1222 23:58:40.951962  659414 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:58:40.953185  659414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:58:41.015841  659414 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-22 23:58:41.005448691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:58:41.016075  659414 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 23:58:41.016406  659414 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:58:41.018305  659414 out.go:179] * Using Docker driver with root privileges
	I1222 23:58:41.019341  659414 cni.go:84] Creating CNI manager for "bridge"
	I1222 23:58:41.019362  659414 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 23:58:41.019479  659414 start.go:353] cluster config:
	{Name:bridge-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:58:41.020648  659414 out.go:179] * Starting "bridge-003676" primary control-plane node in "bridge-003676" cluster
	I1222 23:58:41.021714  659414 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:58:41.022820  659414 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:58:41.023877  659414 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:58:41.023910  659414 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1222 23:58:41.023930  659414 cache.go:65] Caching tarball of preloaded images
	I1222 23:58:41.024003  659414 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:58:41.024049  659414 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 23:58:41.024065  659414 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
	I1222 23:58:41.024175  659414 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/config.json ...
	I1222 23:58:41.024204  659414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/config.json: {Name:mkc8c8cc66a0b2c407cf560042ea4def76af4e12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:41.048366  659414 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:58:41.048384  659414 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:58:41.048400  659414 cache.go:243] Successfully downloaded all kic artifacts
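	The three cache lines above are the pull-avoidance check: the digest-pinned kicbase image is looked up in the local daemon first, and both the pull and the load are skipped when it is already present. A minimal shell sketch of the same check (the image reference is a placeholder):
	
	    IMG='gcr.io/k8s-minikube/kicbase-builds:<tag>@sha256:<digest>'  # placeholder ref
	    # `docker image inspect` succeeds only if the image already exists locally
	    docker image inspect "$IMG" >/dev/null 2>&1 || docker pull "$IMG"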
	I1222 23:58:41.048435  659414 start.go:360] acquireMachinesLock for bridge-003676: {Name:mka5ffcef69ae6d541e22ba9e3a2f0534fb91948 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:58:41.048528  659414 start.go:364] duration metric: took 76.616µs to acquireMachinesLock for "bridge-003676"
	I1222 23:58:41.048550  659414 start.go:93] Provisioning new machine with config: &{Name:bridge-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:58:41.048686  659414 start.go:125] createHost starting for "" (driver="docker")
	I1222 23:58:36.891360  657435 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:58:36.891690  657435 start.go:159] libmachine.API.Create for "flannel-003676" (driver="docker")
	I1222 23:58:36.891743  657435 client.go:173] LocalClient.Create starting
	I1222 23:58:36.891855  657435 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:58:36.891897  657435 main.go:144] libmachine: Decoding PEM data...
	I1222 23:58:36.891924  657435 main.go:144] libmachine: Parsing certificate...
	I1222 23:58:36.891998  657435 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:58:36.892028  657435 main.go:144] libmachine: Decoding PEM data...
	I1222 23:58:36.892047  657435 main.go:144] libmachine: Parsing certificate...
	I1222 23:58:36.892500  657435 cli_runner.go:164] Run: docker network inspect flannel-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:58:36.909362  657435 cli_runner.go:211] docker network inspect flannel-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:58:36.909434  657435 network_create.go:284] running [docker network inspect flannel-003676] to gather additional debugging logs...
	I1222 23:58:36.909452  657435 cli_runner.go:164] Run: docker network inspect flannel-003676
	W1222 23:58:36.928053  657435 cli_runner.go:211] docker network inspect flannel-003676 returned with exit code 1
	I1222 23:58:36.928090  657435 network_create.go:287] error running [docker network inspect flannel-003676]: docker network inspect flannel-003676: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-003676 not found
	I1222 23:58:36.928102  657435 network_create.go:289] output of [docker network inspect flannel-003676]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-003676 not found
	
	** /stderr **
	I1222 23:58:36.928218  657435 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:58:36.947175  657435 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:58:36.948050  657435 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:58:36.949203  657435 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:58:36.950155  657435 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ef3f426059e0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:20:db:77:a0:e0} reservation:<nil>}
	I1222 23:58:36.951470  657435 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f96550}
	I1222 23:58:36.951507  657435 network_create.go:124] attempt to create docker network flannel-003676 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 23:58:36.951576  657435 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-003676 flannel-003676
	I1222 23:58:37.009818  657435 network_create.go:108] docker network flannel-003676 192.168.85.0/24 created
	I1222 23:58:37.009843  657435 kic.go:121] calculated static IP "192.168.85.2" for the "flannel-003676" container
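	The subnet scan above walks candidate private /24s, skips any already claimed by an existing docker network, and creates the cluster network on the first free one (192.168.85.0/24 here). A rough shell equivalent of that selection loop, assuming the same candidate order (the network name is illustrative):
	
	    for subnet in 192.168.49.0/24 192.168.58.0/24 192.168.67.0/24 192.168.76.0/24 192.168.85.0/24; do
	      # a candidate is taken if some existing network already uses that subnet
	      if ! docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' \
	          $(docker network ls -q) 2>/dev/null | grep -qx "$subnet"; then
	        docker network create --driver=bridge --subnet="$subnet" \
	          --gateway="${subnet%.0/24}.1" demo-net
	        break
	      fi
	    done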
	I1222 23:58:37.009903  657435 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:58:37.031462  657435 cli_runner.go:164] Run: docker volume create flannel-003676 --label name.minikube.sigs.k8s.io=flannel-003676 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:58:37.053302  657435 oci.go:103] Successfully created a docker volume flannel-003676
	I1222 23:58:37.053404  657435 cli_runner.go:164] Run: docker run --rm --name flannel-003676-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-003676 --entrypoint /usr/bin/test -v flannel-003676:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:58:37.504029  657435 oci.go:107] Successfully prepared a docker volume flannel-003676
	I1222 23:58:37.504088  657435 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:58:37.504098  657435 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 23:58:37.504164  657435 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v flannel-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 23:58:39.873269  657435 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v flannel-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (2.369047133s)
	I1222 23:58:39.873301  657435 kic.go:203] duration metric: took 2.369200041s to extract preloaded images to volume ...
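	The extraction step condenses to: mount the lz4 preload tarball read-only alongside the node's named volume and untar it inside a short-lived container. A sketch with placeholder names ($IMAGE_WITH_LZ4 stands for any image that ships tar and lz4, as the kicbase image does):
	
	    docker volume create demo-var
	    docker run --rm \
	      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	      -v demo-var:/extractDir \
	      --entrypoint /usr/bin/tar \
	      "$IMAGE_WITH_LZ4" -I lz4 -xf /preloaded.tar -C /extractDir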
	W1222 23:58:39.873410  657435 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:58:39.873499  657435 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:58:39.935372  657435 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-003676 --name flannel-003676 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-003676 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-003676 --network flannel-003676 --ip 192.168.85.2 --volume flannel-003676:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:58:40.275907  657435 cli_runner.go:164] Run: docker container inspect flannel-003676 --format={{.State.Running}}
	I1222 23:58:40.296955  657435 cli_runner.go:164] Run: docker container inspect flannel-003676 --format={{.State.Status}}
	I1222 23:58:40.317970  657435 cli_runner.go:164] Run: docker exec flannel-003676 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:58:40.401464  657435 oci.go:144] the created container "flannel-003676" has a running status.
	I1222 23:58:40.401509  657435 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/flannel-003676/id_rsa...
	I1222 23:58:40.549983  657435 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/flannel-003676/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:58:40.580312  657435 cli_runner.go:164] Run: docker container inspect flannel-003676 --format={{.State.Status}}
	I1222 23:58:40.607891  657435 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:58:40.607916  657435 kic_runner.go:114] Args: [docker exec --privileged flannel-003676 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:58:40.660534  657435 cli_runner.go:164] Run: docker container inspect flannel-003676 --format={{.State.Status}}
	I1222 23:58:40.681168  657435 machine.go:94] provisionDockerMachine start ...
	I1222 23:58:40.681250  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:40.701431  657435 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:40.701776  657435 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I1222 23:58:40.701814  657435 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:58:40.702733  657435 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48364->127.0.0.1:33153: read: connection reset by peer
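	The handshake error above is expected this early: the container is running but sshd inside it has not finished starting, so the provisioner simply retries (the same command succeeds at 23:58:43 below). A hedged wait-for-ssh loop for the same situation, with illustrative port and key variables:
	
	    until ssh -i "$SSH_KEY" -p "$SSH_PORT" -o StrictHostKeyChecking=no \
	        docker@127.0.0.1 true 2>/dev/null; do
	      sleep 1
	    done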
	W1222 23:58:38.498541  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:40.997513  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:58:41.051314  659414 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:58:41.051519  659414 start.go:159] libmachine.API.Create for "bridge-003676" (driver="docker")
	I1222 23:58:41.051554  659414 client.go:173] LocalClient.Create starting
	I1222 23:58:41.051689  659414 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:58:41.051728  659414 main.go:144] libmachine: Decoding PEM data...
	I1222 23:58:41.051753  659414 main.go:144] libmachine: Parsing certificate...
	I1222 23:58:41.051815  659414 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:58:41.051833  659414 main.go:144] libmachine: Decoding PEM data...
	I1222 23:58:41.051854  659414 main.go:144] libmachine: Parsing certificate...
	I1222 23:58:41.052168  659414 cli_runner.go:164] Run: docker network inspect bridge-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:58:41.068991  659414 cli_runner.go:211] docker network inspect bridge-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:58:41.069083  659414 network_create.go:284] running [docker network inspect bridge-003676] to gather additional debugging logs...
	I1222 23:58:41.069111  659414 cli_runner.go:164] Run: docker network inspect bridge-003676
	W1222 23:58:41.086110  659414 cli_runner.go:211] docker network inspect bridge-003676 returned with exit code 1
	I1222 23:58:41.086158  659414 network_create.go:287] error running [docker network inspect bridge-003676]: docker network inspect bridge-003676: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-003676 not found
	I1222 23:58:41.086175  659414 network_create.go:289] output of [docker network inspect bridge-003676]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-003676 not found
	
	** /stderr **
	I1222 23:58:41.086310  659414 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:58:41.104218  659414 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:58:41.104822  659414 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:58:41.105459  659414 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:58:41.106278  659414 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eed550}
	I1222 23:58:41.106309  659414 network_create.go:124] attempt to create docker network bridge-003676 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1222 23:58:41.106358  659414 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-003676 bridge-003676
	I1222 23:58:41.156624  659414 network_create.go:108] docker network bridge-003676 192.168.76.0/24 created
	I1222 23:58:41.156658  659414 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-003676" container
	I1222 23:58:41.156749  659414 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:58:41.174449  659414 cli_runner.go:164] Run: docker volume create bridge-003676 --label name.minikube.sigs.k8s.io=bridge-003676 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:58:41.192503  659414 oci.go:103] Successfully created a docker volume bridge-003676
	I1222 23:58:41.192618  659414 cli_runner.go:164] Run: docker run --rm --name bridge-003676-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-003676 --entrypoint /usr/bin/test -v bridge-003676:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:58:41.566195  659414 oci.go:107] Successfully prepared a docker volume bridge-003676
	I1222 23:58:41.566263  659414 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:58:41.566274  659414 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 23:58:41.566337  659414 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 23:58:44.863625  659414 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297231519s)
	I1222 23:58:44.863670  659414 kic.go:203] duration metric: took 3.2973911s to extract preloaded images to volume ...
	W1222 23:58:44.863788  659414 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:58:44.863879  659414 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:58:44.920411  659414 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-003676 --name bridge-003676 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-003676 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-003676 --network bridge-003676 --ip 192.168.76.2 --volume bridge-003676:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:58:45.182737  659414 cli_runner.go:164] Run: docker container inspect bridge-003676 --format={{.State.Running}}
	I1222 23:58:45.201295  659414 cli_runner.go:164] Run: docker container inspect bridge-003676 --format={{.State.Status}}
	I1222 23:58:45.221866  659414 cli_runner.go:164] Run: docker exec bridge-003676 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:58:45.265658  659414 oci.go:144] the created container "bridge-003676" has a running status.
	I1222 23:58:45.265703  659414 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/bridge-003676/id_rsa...
	I1222 23:58:45.412235  659414 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/bridge-003676/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:58:45.437447  659414 cli_runner.go:164] Run: docker container inspect bridge-003676 --format={{.State.Status}}
	I1222 23:58:45.456868  659414 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:58:45.456896  659414 kic_runner.go:114] Args: [docker exec --privileged bridge-003676 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:58:45.509761  659414 cli_runner.go:164] Run: docker container inspect bridge-003676 --format={{.State.Status}}
	I1222 23:58:45.531333  659414 machine.go:94] provisionDockerMachine start ...
	I1222 23:58:45.531439  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:45.551031  659414 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:45.551402  659414 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1222 23:58:45.551432  659414 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:58:45.552139  659414 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37632->127.0.0.1:33158: read: connection reset by peer
	I1222 23:58:43.850735  657435 main.go:144] libmachine: SSH cmd err, output: <nil>: flannel-003676
	
	I1222 23:58:43.850774  657435 ubuntu.go:182] provisioning hostname "flannel-003676"
	I1222 23:58:43.850848  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:43.868504  657435 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:43.868757  657435 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I1222 23:58:43.868773  657435 main.go:144] libmachine: About to run SSH command:
	sudo hostname flannel-003676 && echo "flannel-003676" | sudo tee /etc/hostname
	I1222 23:58:44.072161  657435 main.go:144] libmachine: SSH cmd err, output: <nil>: flannel-003676
	
	I1222 23:58:44.072260  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:44.089980  657435 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:44.090208  657435 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I1222 23:58:44.090225  657435 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-003676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-003676/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-003676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:58:44.232450  657435 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:58:44.232478  657435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:58:44.232526  657435 ubuntu.go:190] setting up certificates
	I1222 23:58:44.232537  657435 provision.go:84] configureAuth start
	I1222 23:58:44.232618  657435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-003676
	I1222 23:58:44.251002  657435 provision.go:143] copyHostCerts
	I1222 23:58:44.251056  657435 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:58:44.251068  657435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:58:44.251130  657435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:58:44.251216  657435 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:58:44.251225  657435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:58:44.251249  657435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:58:44.251303  657435 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:58:44.251311  657435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:58:44.251335  657435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:58:44.251385  657435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.flannel-003676 san=[127.0.0.1 192.168.85.2 flannel-003676 localhost minikube]
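	configureAuth performs this certificate generation in Go; expressed with openssl for the same SAN set shown in the log line above (filenames and flags are illustrative, not minikube's implementation):
	
	    openssl req -new -key server-key.pem -subj "/O=jenkins.flannel-003676" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:flannel-003676,DNS:localhost,DNS:minikube')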
	I1222 23:58:44.463293  657435 provision.go:177] copyRemoteCerts
	I1222 23:58:44.463380  657435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:58:44.463427  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:44.482432  657435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/flannel-003676/id_rsa Username:docker}
	I1222 23:58:44.585038  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:58:44.785658  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:58:44.803943  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1222 23:58:44.821365  657435 provision.go:87] duration metric: took 588.811872ms to configureAuth
	I1222 23:58:44.821394  657435 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:58:44.821571  657435 config.go:182] Loaded profile config "flannel-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:58:44.821648  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:44.839395  657435 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:44.839710  657435 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I1222 23:58:44.839727  657435 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:58:44.990410  657435 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:58:44.990436  657435 ubuntu.go:71] root file system type: overlay
	I1222 23:58:44.990575  657435 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:58:44.990662  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:45.008438  657435 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:45.008725  657435 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I1222 23:58:45.008826  657435 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:58:45.172072  657435 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
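	The empty ExecStart= followed by a populated one in the unit above is the standard systemd idiom for overriding an inherited command: clearing it first matters because a non-oneshot service may declare only one ExecStart. The same idiom in a minimal drop-in, written as shell (the path and dockerd flags are illustrative):
	
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	    [Service]
	    # clear the ExecStart inherited from the base unit, then set the replacement
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart docker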
	I1222 23:58:45.172165  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:45.192891  657435 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:45.193205  657435 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I1222 23:58:45.193239  657435 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:58:46.420760  657435 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-22 23:58:45.169851809 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
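	The diff-guarded command above is an idempotency check: diff -u exits non-zero only when the staged docker.service.new differs from the installed unit, so the mv/daemon-reload/enable/restart branch runs only when something actually changed, and an unchanged Docker daemon is left undisturbed. Roughly the same compare-then-swap logic in Go (paths assumed; not minikube's implementation):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		const installed = "/lib/systemd/system/docker.service"  // assumed path
		const staged = "/lib/systemd/system/docker.service.new" // assumed path
	
		oldUnit, _ := os.ReadFile(installed) // a missing file reads as empty, i.e. "changed"
		newUnit, err := os.ReadFile(staged)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if bytes.Equal(oldUnit, newUnit) {
			return // nothing changed: leave the running service alone
		}
		if err := os.Rename(staged, installed); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Mirror the shell branch: daemon-reload, enable, restart -- only on change.
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "%v: %v: %s\n", args, err, out)
				os.Exit(1)
			}
		}
	}
	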
	I1222 23:58:46.420794  657435 machine.go:97] duration metric: took 5.739600353s to provisionDockerMachine
	I1222 23:58:46.420809  657435 client.go:176] duration metric: took 9.52905818s to LocalClient.Create
	I1222 23:58:46.420834  657435 start.go:167] duration metric: took 9.529144748s to libmachine.API.Create "flannel-003676"
	I1222 23:58:46.420847  657435 start.go:293] postStartSetup for "flannel-003676" (driver="docker")
	I1222 23:58:46.420863  657435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:58:46.420927  657435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:58:46.420976  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:46.439649  657435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/flannel-003676/id_rsa Username:docker}
	I1222 23:58:46.541352  657435 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:58:46.544725  657435 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:58:46.544748  657435 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:58:46.544758  657435 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:58:46.544801  657435 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:58:46.544893  657435 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:58:46.544983  657435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:58:46.552278  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:58:46.572051  657435 start.go:296] duration metric: took 151.188605ms for postStartSetup
	I1222 23:58:46.572358  657435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-003676
	I1222 23:58:46.590578  657435 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/config.json ...
	I1222 23:58:46.590846  657435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:58:46.590886  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:46.607420  657435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/flannel-003676/id_rsa Username:docker}
	W1222 23:58:42.998148  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:45.497681  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:58:46.704692  657435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:58:46.709341  657435 start.go:128] duration metric: took 9.819787465s to createHost
	I1222 23:58:46.709364  657435 start.go:83] releasing machines lock for "flannel-003676", held for 9.819941859s
	I1222 23:58:46.709438  657435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-003676
	I1222 23:58:46.727105  657435 ssh_runner.go:195] Run: cat /version.json
	I1222 23:58:46.727156  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:46.727205  657435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:58:46.727273  657435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-003676
	I1222 23:58:46.745345  657435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/flannel-003676/id_rsa Username:docker}
	I1222 23:58:46.745993  657435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/flannel-003676/id_rsa Username:docker}
	I1222 23:58:46.843372  657435 ssh_runner.go:195] Run: systemctl --version
	I1222 23:58:46.899731  657435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:58:46.904464  657435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:58:46.904524  657435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:58:46.928005  657435 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1222 23:58:46.928026  657435 start.go:496] detecting cgroup driver to use...
	I1222 23:58:46.928060  657435 detect.go:187] detected "cgroupfs" cgroup driver on host os
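	detect.go's result above drives everything that follows: the sed edits set containerd's SystemdCgroup = false, and the kubelet config rendered later pins cgroupDriver: cgroupfs, because runtime and kubelet must agree on one cgroup driver. A plausible probe for such detection, sketched in Go (a simplified heuristic for illustration, not minikube's actual logic):
	
	package main
	
	import (
		"fmt"
		"os"
	)
	
	// detectCgroupDriver is a simplified stand-in: report "systemd" when the
	// host runs systemd with unified cgroup v2, and "cgroupfs" otherwise.
	func detectCgroupDriver() string {
		_, errSystemd := os.Stat("/run/systemd/system")
		_, errCgroupV2 := os.Stat("/sys/fs/cgroup/cgroup.controllers")
		if errSystemd == nil && errCgroupV2 == nil {
			return "systemd"
		}
		return "cgroupfs"
	}
	
	func main() {
		fmt.Println("detected cgroup driver:", detectCgroupDriver())
	}
	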
	I1222 23:58:46.928182  657435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:58:46.941673  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:58:46.950904  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:58:46.959343  657435 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:58:46.959386  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:58:46.967574  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:58:46.975720  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:58:46.983608  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:58:46.991628  657435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:58:46.999274  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:58:47.008087  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:58:47.016983  657435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 23:58:47.026835  657435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:58:47.034881  657435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:58:47.042352  657435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:47.121734  657435 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 23:58:47.194064  657435 start.go:496] detecting cgroup driver to use...
	I1222 23:58:47.194117  657435 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:58:47.194172  657435 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:58:47.207899  657435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:58:47.219795  657435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:58:47.236516  657435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:58:47.248409  657435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:58:47.260412  657435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:58:47.273856  657435 ssh_runner.go:195] Run: which cri-dockerd
	I1222 23:58:47.277455  657435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:58:47.286247  657435 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:58:47.298127  657435 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:58:47.380814  657435 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:58:47.460310  657435 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:58:47.460439  657435 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:58:47.473477  657435 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:58:47.485639  657435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:47.577860  657435 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:58:48.286534  657435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:58:48.298897  657435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:58:48.310954  657435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:58:48.323206  657435 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:58:48.407550  657435 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:58:48.498496  657435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:48.586903  657435 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:58:48.611904  657435 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:58:48.623720  657435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:48.707150  657435 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:58:48.780101  657435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:58:48.793317  657435 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:58:48.793392  657435 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
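	"Will wait 60s for socket path" above is a plain polling loop: stat the socket until it exists or the deadline passes. A self-contained sketch of that pattern (socket path taken from the log; the poll interval is assumed):
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	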
	I1222 23:58:48.797347  657435 start.go:564] Will wait 60s for crictl version
	I1222 23:58:48.797394  657435 ssh_runner.go:195] Run: which crictl
	I1222 23:58:48.800876  657435 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:58:48.825278  657435 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 23:58:48.825346  657435 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:58:48.852140  657435 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:58:48.879375  657435 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.3 ...
	I1222 23:58:48.879482  657435 cli_runner.go:164] Run: docker network inspect flannel-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:58:48.897690  657435 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 23:58:48.901881  657435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
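	The one-liner above refreshes /etc/hosts: keep every line except a stale host.minikube.internal mapping, append the new one, stage the result under /tmp, then cp it over the original. Copying rather than renaming likely matters because /etc/hosts is bind-mounted into the container and must keep its inode. The same logic sketched in Go:
	
	package main
	
	import (
		"os"
		"strings"
	)
	
	func main() {
		const entry = "192.168.85.1\thost.minikube.internal" // mapping from the log
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // drop any stale mapping before re-adding it
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		// Write in place rather than rename: inside a container /etc/hosts is a
		// bind mount, and swapping the inode out from under it would break it.
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
	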
	I1222 23:58:48.911881  657435 kubeadm.go:884] updating cluster {Name:flannel-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:58:48.912002  657435 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:58:48.912046  657435 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:58:48.932698  657435 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:58:48.932720  657435 docker.go:624] Images already preloaded, skipping extraction
	I1222 23:58:48.932774  657435 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:58:48.953067  657435 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:58:48.953096  657435 cache_images.go:86] Images are preloaded, skipping loading
	I1222 23:58:48.953109  657435 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1222 23:58:48.953231  657435 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-003676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1222 23:58:48.953297  657435 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 23:58:49.004988  657435 cni.go:84] Creating CNI manager for "flannel"
	I1222 23:58:49.005022  657435 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 23:58:49.005050  657435 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-003676 NodeName:flannel-003676 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:58:49.005195  657435 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "flannel-003676"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
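	The rendered kubeadm config above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. The field that has to line up with the container runtime is the kubelet's cgroupDriver, which is why `docker info --format {{.CgroupDriver}}` was run just before rendering. A small consistency check in Go (uses gopkg.in/yaml.v3; the config snippet is inlined for the example):
	
	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	const kubeletDoc = `
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	`
	
	func main() {
		var cfg struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// In the real flow this value would come from `docker info --format {{.CgroupDriver}}`.
		runtimeDriver := "cgroupfs"
		if cfg.CgroupDriver != runtimeDriver {
			fmt.Fprintf(os.Stderr, "kubelet %q != runtime %q: kubelet would fail to manage pods\n",
				cfg.CgroupDriver, runtimeDriver)
			os.Exit(1)
		}
		fmt.Println("cgroup drivers agree:", cfg.CgroupDriver)
	}
	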
	I1222 23:58:49.005283  657435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 23:58:49.014607  657435 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 23:58:49.014677  657435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:58:49.023367  657435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1222 23:58:49.037634  657435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 23:58:49.050472  657435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1222 23:58:49.063996  657435 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:58:49.067513  657435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:58:49.076891  657435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:49.161171  657435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:58:49.190181  657435 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676 for IP: 192.168.85.2
	I1222 23:58:49.190197  657435 certs.go:195] generating shared ca certs ...
	I1222 23:58:49.190213  657435 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:49.190396  657435 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:58:49.190441  657435 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:58:49.190451  657435 certs.go:257] generating profile certs ...
	I1222 23:58:49.190509  657435 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.key
	I1222 23:58:49.190527  657435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt with IP's: []
	I1222 23:58:49.280117  657435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt ...
	I1222 23:58:49.280163  657435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: {Name:mk93fe43e82e4ed8c555da6ea43ba548fc4643b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:49.280509  657435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.key ...
	I1222 23:58:49.280531  657435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.key: {Name:mk6eebe666705b7619ba42829145b208ce1fcb46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:49.280759  657435 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.key.022e44f7
	I1222 23:58:49.280786  657435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.crt.022e44f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 23:58:49.368642  657435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.crt.022e44f7 ...
	I1222 23:58:49.368671  657435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.crt.022e44f7: {Name:mk1853a5ce8ef8f0822e900b32fec8f79fe4aa01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:49.368834  657435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.key.022e44f7 ...
	I1222 23:58:49.368849  657435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.key.022e44f7: {Name:mkc846660723990fde091dd511605cb76b78f4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:49.368938  657435 certs.go:382] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.crt.022e44f7 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.crt
	I1222 23:58:49.369017  657435 certs.go:386] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.key.022e44f7 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.key
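	The apiserver certificate's SAN list begins with 10.96.0.1: the first usable address of the service CIDR (serviceSubnet: 10.96.0.0/12 above), which Kubernetes reserves for the in-cluster `kubernetes` Service, so in-cluster clients can verify the apiserver at that address. Computing it is just the network address plus one:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// firstServiceIP returns the first usable address of a service CIDR,
	// i.e. the network address with its last byte incremented.
	func firstServiceIP(cidr string) (net.IP, error) {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := ipnet.IP.To4()
		first := make(net.IP, len(ip))
		copy(first, ip)
		first[len(first)-1]++ // 10.96.0.0 -> 10.96.0.1
		return first, nil
	}
	
	func main() {
		ip, err := firstServiceIP("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 10.96.0.1
	}
	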
	I1222 23:58:49.369078  657435 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.key
	I1222 23:58:49.369097  657435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.crt with IP's: []
	I1222 23:58:49.511823  657435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.crt ...
	I1222 23:58:49.511853  657435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.crt: {Name:mk2cb61234d40a0be980081c176af937eb0d2423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:49.512017  657435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.key ...
	I1222 23:58:49.512034  657435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.key: {Name:mkce2f7647e82b8ba76eff6fe963ab131976bb8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:49.512255  657435 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:58:49.512301  657435 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:58:49.512316  657435 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:58:49.512350  657435 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:58:49.512379  657435 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:58:49.512410  657435 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:58:49.512459  657435 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:58:49.513160  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:58:49.532992  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:58:49.550685  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:58:49.568969  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:58:49.586058  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 23:58:49.603143  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1222 23:58:49.619791  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:58:49.636298  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 23:58:49.652961  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:58:49.671526  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:58:49.688267  657435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:58:49.706947  657435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:58:49.719872  657435 ssh_runner.go:195] Run: openssl version
	I1222 23:58:49.726003  657435 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:58:49.733303  657435 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:58:49.740863  657435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:58:49.744565  657435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:58:49.744625  657435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:58:49.786939  657435 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 23:58:49.794917  657435 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75803.pem /etc/ssl/certs/51391683.0
	I1222 23:58:49.802288  657435 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:58:49.809674  657435 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:58:49.817362  657435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:58:49.820997  657435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:58:49.821036  657435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:58:49.854704  657435 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 23:58:49.862059  657435 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/758032.pem /etc/ssl/certs/3ec20f2e.0
	I1222 23:58:49.869353  657435 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:49.876660  657435 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:58:49.884054  657435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:49.887826  657435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:49.887876  657435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:49.936534  657435 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:58:49.944234  657435 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
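	The openssl/ln pairs in this stretch implement OpenSSL's hashed trust-store convention: a CA directory is indexed by symlinks named <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`, which is how minikubeCA.pem also becomes reachable as b5213941.0. The same two steps driven from Go (shelling out to openssl, as the log itself does; paths illustrative):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		const cert = "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := "/etc/ssl/certs/" + hash + ".0"
		// ln -fs replaces an existing link; os.Symlink has no force flag, so remove first.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(link, "->", cert)
	}
	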
	I1222 23:58:49.951547  657435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:58:49.955125  657435 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 23:58:49.955190  657435 kubeadm.go:401] StartCluster: {Name:flannel-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:58:49.955364  657435 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:58:49.975167  657435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:58:49.983225  657435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:58:49.991320  657435 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:58:49.991378  657435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:58:50.005074  657435 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:58:50.005096  657435 kubeadm.go:158] found existing configuration files:
	
	I1222 23:58:50.005142  657435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:58:50.014771  657435 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:58:50.014834  657435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:58:50.023222  657435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:58:50.031574  657435 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:58:50.031650  657435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:58:50.039605  657435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:58:50.050032  657435 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:58:50.050106  657435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:58:50.058170  657435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:58:50.066396  657435 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:58:50.066462  657435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
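	The four grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so that kubeadm init regenerates it. The same check sketched in Go:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // already points at the expected endpoint; keep it
			}
			// Missing or pointing elsewhere: remove so `kubeadm init` regenerates it.
			if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
	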
	I1222 23:58:50.073864  657435 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:58:50.119470  657435 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 23:58:50.119553  657435 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:58:50.142116  657435 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:58:50.142202  657435 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:58:50.142254  657435 kubeadm.go:319] OS: Linux
	I1222 23:58:50.142351  657435 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:58:50.142426  657435 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:58:50.142508  657435 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:58:50.142606  657435 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:58:50.142686  657435 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:58:50.142777  657435 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:58:50.142847  657435 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:58:50.142923  657435 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:58:50.142995  657435 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:58:50.206997  657435 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:58:50.207112  657435 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:58:50.207214  657435 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:58:50.219259  657435 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:58:48.699921  659414 main.go:144] libmachine: SSH cmd err, output: <nil>: bridge-003676
	
	I1222 23:58:48.699948  659414 ubuntu.go:182] provisioning hostname "bridge-003676"
	I1222 23:58:48.700008  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:48.719368  659414 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:48.719628  659414 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1222 23:58:48.719644  659414 main.go:144] libmachine: About to run SSH command:
	sudo hostname bridge-003676 && echo "bridge-003676" | sudo tee /etc/hostname
	I1222 23:58:48.875239  659414 main.go:144] libmachine: SSH cmd err, output: <nil>: bridge-003676
	
	I1222 23:58:48.875331  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:48.894332  659414 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:48.894575  659414 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1222 23:58:48.894619  659414 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-003676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-003676/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-003676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:58:49.040523  659414 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:58:49.040568  659414 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:58:49.040613  659414 ubuntu.go:190] setting up certificates
	I1222 23:58:49.040645  659414 provision.go:84] configureAuth start
	I1222 23:58:49.040705  659414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-003676
	I1222 23:58:49.059182  659414 provision.go:143] copyHostCerts
	I1222 23:58:49.059244  659414 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:58:49.059259  659414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:58:49.059327  659414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:58:49.059441  659414 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:58:49.059459  659414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:58:49.059505  659414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:58:49.059584  659414 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:58:49.059605  659414 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:58:49.059647  659414 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:58:49.059710  659414 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.bridge-003676 san=[127.0.0.1 192.168.76.2 bridge-003676 localhost minikube]
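	provision.go above issues the machine's TLS server certificate with the SANs clients will use to reach dockerd: the loopback and container IPs plus the hostname aliases. A compact sketch of SAN-bearing certificate issuance with Go's crypto/x509 (self-signed here for brevity, whereas minikube signs with its CA key):
	
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.bridge-003676"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs from the log: IPs and DNS names the server answers on.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"bridge-003676", "localhost", "minikube"},
		}
		// Self-signed for the sketch; minikube signs with ca.pem/ca-key.pem instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	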
	I1222 23:58:49.185881  659414 provision.go:177] copyRemoteCerts
	I1222 23:58:49.185952  659414 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:58:49.186002  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:49.203855  659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/bridge-003676/id_rsa Username:docker}
	I1222 23:58:49.306060  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:58:49.325528  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1222 23:58:49.342530  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:58:49.359715  659414 provision.go:87] duration metric: took 319.054833ms to configureAuth
	I1222 23:58:49.359742  659414 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:58:49.359893  659414 config.go:182] Loaded profile config "bridge-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:58:49.359941  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:49.377992  659414 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:49.378190  659414 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1222 23:58:49.378201  659414 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:58:49.520191  659414 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:58:49.520211  659414 ubuntu.go:71] root file system type: overlay
	I1222 23:58:49.520342  659414 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:58:49.520413  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:49.539492  659414 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:49.539757  659414 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1222 23:58:49.539858  659414 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:58:49.693788  659414 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:58:49.693868  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:49.712189  659414 main.go:144] libmachine: Using SSH client type: native
	I1222 23:58:49.712454  659414 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1222 23:58:49.712479  659414 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:58:50.221569  657435 out.go:252]   - Generating certificates and keys ...
	I1222 23:58:50.221721  657435 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:58:50.221816  657435 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:58:50.503303  657435 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 23:58:50.681055  657435 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 23:58:50.980793  657435 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 23:58:51.009887  657435 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1222 23:58:47.997705  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:50.000093  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:58:50.988403  659414 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-22 23:58:49.692289069 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1222 23:58:50.988431  659414 machine.go:97] duration metric: took 5.457070901s to provisionDockerMachine
	I1222 23:58:50.988445  659414 client.go:176] duration metric: took 9.936882483s to LocalClient.Create
	I1222 23:58:50.988465  659414 start.go:167] duration metric: took 9.936945695s to libmachine.API.Create "bridge-003676"
	I1222 23:58:50.988478  659414 start.go:293] postStartSetup for "bridge-003676" (driver="docker")
	I1222 23:58:50.988490  659414 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:58:50.988562  659414 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:58:50.988641  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:51.006497  659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/bridge-003676/id_rsa Username:docker}
	I1222 23:58:51.109520  659414 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:58:51.113402  659414 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:58:51.113435  659414 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:58:51.113450  659414 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:58:51.113501  659414 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:58:51.113585  659414 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:58:51.113720  659414 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:58:51.121200  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:58:51.139998  659414 start.go:296] duration metric: took 151.505376ms for postStartSetup
	I1222 23:58:51.140357  659414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-003676
	I1222 23:58:51.157903  659414 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/config.json ...
	I1222 23:58:51.158137  659414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:58:51.158180  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:51.175507  659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/bridge-003676/id_rsa Username:docker}
	I1222 23:58:51.274533  659414 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:58:51.279583  659414 start.go:128] duration metric: took 10.230878161s to createHost
	I1222 23:58:51.279618  659414 start.go:83] releasing machines lock for "bridge-003676", held for 10.231077544s
	I1222 23:58:51.279690  659414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-003676
	I1222 23:58:51.299172  659414 ssh_runner.go:195] Run: cat /version.json
	I1222 23:58:51.299232  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:51.299285  659414 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:58:51.299369  659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-003676
	I1222 23:58:51.318798  659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/bridge-003676/id_rsa Username:docker}
	I1222 23:58:51.319390  659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/bridge-003676/id_rsa Username:docker}
	I1222 23:58:51.416784  659414 ssh_runner.go:195] Run: systemctl --version
	I1222 23:58:51.473755  659414 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:58:51.478631  659414 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:58:51.478702  659414 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:58:51.503771  659414 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
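
The find/mv step above sidelines any pre-installed bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the CNI selected for this cluster is the only one the runtime loads. A rough Go equivalent of that shell step (a sketch under the assumption that a simple rename suffices; not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNIs renames bridge/podman CNI configs in dir so the
// container runtime ignores them, mirroring the find/mv pipeline above.
func disableConflictingCNIs(dir string) error {
	matches, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		return err
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already sidelined
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	if err := disableConflictingCNIs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
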
	I1222 23:58:51.503805  659414 start.go:496] detecting cgroup driver to use...
	I1222 23:58:51.503842  659414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:58:51.504003  659414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:58:51.519620  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:58:51.531559  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:58:51.541545  659414 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:58:51.541638  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:58:51.551966  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:58:51.560447  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:58:51.568913  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:58:51.577358  659414 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:58:51.585065  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:58:51.593491  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:58:51.601648  659414 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 23:58:51.609967  659414 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:58:51.617018  659414 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:58:51.623739  659414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:51.702748  659414 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 23:58:51.775957  659414 start.go:496] detecting cgroup driver to use...
	I1222 23:58:51.776005  659414 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:58:51.776056  659414 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:58:51.789903  659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:58:51.801622  659414 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:58:51.816359  659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:58:51.827965  659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:58:51.839874  659414 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:58:51.853483  659414 ssh_runner.go:195] Run: which cri-dockerd
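
/etc/crictl.yaml tells crictl which CRI endpoint to talk to; it was first pointed at containerd above and is rewritten here for cri-dockerd once the docker runtime is chosen. A minimal local-file sketch of that write (the real flow pipes printf through sudo tee over SSH):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig points crictl at a CRI socket. For the docker runtime the
// endpoint is unix:///var/run/cri-dockerd.sock, as in the log above.
func writeCrictlConfig(path, endpoint string) error {
	return os.WriteFile(path, []byte("runtime-endpoint: "+endpoint+"\n"), 0o644)
}

func main() {
	if err := writeCrictlConfig("/tmp/crictl.yaml", "unix:///var/run/cri-dockerd.sock"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
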
	I1222 23:58:51.857135  659414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:58:51.865684  659414 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:58:51.877989  659414 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:58:51.959773  659414 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:58:52.050131  659414 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:58:52.050269  659414 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:58:52.063474  659414 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:58:52.075479  659414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:52.160564  659414 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:58:52.962854  659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:58:52.975569  659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:58:52.988030  659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:58:53.000407  659414 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:58:53.098544  659414 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:58:53.181787  659414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:53.265740  659414 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:58:53.289954  659414 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:58:53.301750  659414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:53.382621  659414 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:58:53.460002  659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:58:53.473033  659414 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:58:53.473100  659414 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 23:58:53.476858  659414 start.go:564] Will wait 60s for crictl version
	I1222 23:58:53.476914  659414 ssh_runner.go:195] Run: which crictl
	I1222 23:58:53.480251  659414 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:58:53.506174  659414 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 23:58:53.506250  659414 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:58:53.533935  659414 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:58:51.898927  657435 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 23:58:51.899140  657435 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-003676 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 23:58:52.129056  657435 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 23:58:52.129217  657435 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-003676 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 23:58:52.487760  657435 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1222 23:58:52.918844  657435 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1222 23:58:53.109695  657435 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1222 23:58:53.109936  657435 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:58:53.352104  657435 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:58:53.771713  657435 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:58:53.965964  657435 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:58:54.110441  657435 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:58:54.425157  657435 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:58:54.425771  657435 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:58:54.429308  657435 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:58:53.563215  659414 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.3 ...
	I1222 23:58:53.563301  659414 cli_runner.go:164] Run: docker network inspect bridge-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:58:53.583196  659414 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1222 23:58:53.587674  659414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
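
The hosts update above follows a strip-then-append pattern: grep -v drops any stale host.minikube.internal line, echo appends the fresh tab-separated mapping, and the result is staged in /tmp and copied back with sudo, since the shell redirection itself runs unprivileged. A hypothetical helper that rebuilds that command string:

package main

import "fmt"

// hostsCmd rebuilds the pipeline from the log line above: remove any existing
// "<tab><host>" entry from /etc/hosts, append "ip<tab>host", stage the result
// in /tmp, then sudo-copy it into place. Illustrative only.
func hostsCmd(ip, host string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%[2]s$' \"/etc/hosts\"; echo \"%[1]s\t%[2]s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		ip, host)
}

func main() { fmt.Println(hostsCmd("192.168.76.1", "host.minikube.internal")) }
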
	I1222 23:58:53.598239  659414 kubeadm.go:884] updating cluster {Name:bridge-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:58:53.598452  659414 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:58:53.598585  659414 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:58:53.619843  659414 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:58:53.619866  659414 docker.go:624] Images already preloaded, skipping extraction
	I1222 23:58:53.619918  659414 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:58:53.641539  659414 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:58:53.641568  659414 cache_images.go:86] Images are preloaded, skipping loading
	I1222 23:58:53.641581  659414 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 docker true true} ...
	I1222 23:58:53.641733  659414 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-003676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:bridge-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1222 23:58:53.641814  659414 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
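
This query reads docker's active cgroup driver; kubelet's cgroupDriver (set to cgroupfs in the KubeletConfiguration below) must agree with it or the kubelet fails to start. A sketch of the same probe in Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver runs the same query as the log line above and returns
// "cgroupfs" or "systemd". Sketch for illustration.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	d, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	fmt.Println("cgroup driver:", d)
}
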
	I1222 23:58:53.690825  659414 cni.go:84] Creating CNI manager for "bridge"
	I1222 23:58:53.690857  659414 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 23:58:53.690888  659414 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-003676 NodeName:bridge-003676 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:58:53.691068  659414 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "bridge-003676"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1222 23:58:53.691138  659414 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 23:58:53.699269  659414 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 23:58:53.699343  659414 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:58:53.706925  659414 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1222 23:58:53.719329  659414 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 23:58:53.731524  659414 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1222 23:58:53.744205  659414 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:58:53.747567  659414 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:58:53.757671  659414 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:58:53.849199  659414 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:58:53.874973  659414 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676 for IP: 192.168.76.2
	I1222 23:58:53.874991  659414 certs.go:195] generating shared ca certs ...
	I1222 23:58:53.875007  659414 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:53.875182  659414 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:58:53.875243  659414 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:58:53.875252  659414 certs.go:257] generating profile certs ...
	I1222 23:58:53.875328  659414 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.key
	I1222 23:58:53.875343  659414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt with IP's: []
	I1222 23:58:53.895920  659414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt ...
	I1222 23:58:53.895947  659414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: {Name:mkb526a04bbcfc0d27d3f8a1defeb613aaddf456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:53.896151  659414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.key ...
	I1222 23:58:53.896171  659414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.key: {Name:mk90e22a50188545ba0762fff6c203a9c2d53a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:53.896294  659414 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.key.b7f3ebfd
	I1222 23:58:53.896317  659414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.crt.b7f3ebfd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1222 23:58:53.968539  659414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.crt.b7f3ebfd ...
	I1222 23:58:53.968565  659414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.crt.b7f3ebfd: {Name:mk9033cef7eddbc709c9296a4fc1fb48ce0f5f34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:53.968737  659414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.key.b7f3ebfd ...
	I1222 23:58:53.968754  659414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.key.b7f3ebfd: {Name:mk102e50c7f0d1b6be919fe730346840a388ee78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:53.968842  659414 certs.go:382] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.crt.b7f3ebfd -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.crt
	I1222 23:58:53.968936  659414 certs.go:386] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.key.b7f3ebfd -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.key
	I1222 23:58:53.968998  659414 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.key
	I1222 23:58:53.969014  659414 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.crt with IP's: []
	I1222 23:58:54.104734  659414 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.crt ...
	I1222 23:58:54.104766  659414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.crt: {Name:mk97555e625c7e3a3557db7830b63bee53964950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:58:54.104931  659414 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.key ...
	I1222 23:58:54.104943  659414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.key: {Name:mk74aed04c772ca36740da42c0c8de36c753cf85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
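
The profile certs above are issued against the cached minikubeCA; the apiserver cert's IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2, listed earlier in the log) cover the in-cluster service VIP (the first address of the 10.96.0.0/12 service CIDR), loopback, and the node IP. A compact sketch of issuing a cert with IP SANs using Go's standard library (illustrative only, not minikube's crypto.go; names and lifetimes are placeholders):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key and template (errors ignored for brevity in this sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// Leaf cert whose IP SANs include the service VIP and node IP.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caTmpl, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("issued cert,", len(der), "DER bytes")
}
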
	I1222 23:58:54.105123  659414 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:58:54.105161  659414 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:58:54.105172  659414 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:58:54.105196  659414 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:58:54.105232  659414 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:58:54.105255  659414 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:58:54.105295  659414 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:58:54.105950  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:58:54.124034  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:58:54.141030  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:58:54.157884  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:58:54.174641  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1222 23:58:54.191148  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 23:58:54.207676  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:58:54.224284  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1222 23:58:54.241082  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:58:54.261440  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:58:54.279855  659414 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:58:54.298370  659414 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:58:54.310510  659414 ssh_runner.go:195] Run: openssl version
	I1222 23:58:54.316405  659414 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:54.323563  659414 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:58:54.330688  659414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:54.334170  659414 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:54.334219  659414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:58:54.367397  659414 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:58:54.374712  659414 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 23:58:54.381575  659414 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:58:54.388538  659414 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:58:54.395373  659414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:58:54.398737  659414 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:58:54.398787  659414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:58:54.433637  659414 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 23:58:54.441562  659414 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75803.pem /etc/ssl/certs/51391683.0
	I1222 23:58:54.449105  659414 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:58:54.457259  659414 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:58:54.465280  659414 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:58:54.469171  659414 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:58:54.469222  659414 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:58:54.510253  659414 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 23:58:54.518529  659414 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/758032.pem /etc/ssl/certs/3ec20f2e.0
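
The openssl/ln sequences above install each CA in OpenSSL's hashed-directory layout: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA here), and the <hash>.0 symlink in /etc/ssl/certs is what TLS libraries actually look up. A sketch of the same step in Go (assumes write access; the real flow runs ln -fs with sudo over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert exposes pemPath through OpenSSL's hashed layout:
// /etc/ssl/certs/<subject-hash>.0 -> pemPath.
func linkCACert(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("linked", link)
}
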
	I1222 23:58:54.526350  659414 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:58:54.530477  659414 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 23:58:54.530539  659414 kubeadm.go:401] StartCluster: {Name:bridge-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:58:54.530687  659414 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:58:54.550588  659414 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:58:54.558694  659414 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:58:54.566546  659414 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:58:54.566616  659414 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:58:54.574218  659414 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:58:54.574234  659414 kubeadm.go:158] found existing configuration files:
	
	I1222 23:58:54.574278  659414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:58:54.581775  659414 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:58:54.581828  659414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:58:54.588858  659414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:58:54.597249  659414 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:58:54.597302  659414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:58:54.604903  659414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:58:54.612681  659414 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:58:54.612738  659414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:58:54.620505  659414 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:58:54.628075  659414 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:58:54.628125  659414 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:58:54.635154  659414 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
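
This init invocation disables every preflight check that cannot hold inside a Docker container (SystemVerification, the bridge-nf sysctl, memory/CPU floors, and files minikube has already staged), matching the "ignoring SystemVerification for kubeadm because of docker driver" note above. A trivial sketch of how such a flag could be assembled (hypothetical helper; the real list is in the log line above):

package main

import (
	"fmt"
	"strings"
)

// ignorePreflight joins the checks to skip into kubeadm's
// --ignore-preflight-errors flag.
func ignorePreflight(checks []string) string {
	return "--ignore-preflight-errors=" + strings.Join(checks, ",")
}

func main() {
	fmt.Println(ignorePreflight([]string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification"}))
}
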
	I1222 23:58:54.671174  659414 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 23:58:54.671954  659414 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:58:54.713315  659414 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:58:54.713406  659414 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:58:54.713490  659414 kubeadm.go:319] OS: Linux
	I1222 23:58:54.713558  659414 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:58:54.713636  659414 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:58:54.713704  659414 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:58:54.713760  659414 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:58:54.713817  659414 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:58:54.714147  659414 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:58:54.714216  659414 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:58:54.714287  659414 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:58:54.714389  659414 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:58:54.772670  659414 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:58:54.772843  659414 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:58:54.772981  659414 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:58:54.784737  659414 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:58:54.790716  659414 out.go:252]   - Generating certificates and keys ...
	I1222 23:58:54.790805  659414 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:58:54.790881  659414 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:58:55.143091  659414 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 23:58:55.476621  659414 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 23:58:55.655462  659414 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 23:58:55.832044  659414 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 23:58:54.430560  657435 out.go:252]   - Booting up control plane ...
	I1222 23:58:54.430674  657435 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:58:54.430773  657435 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:58:54.431387  657435 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:58:54.461911  657435 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:58:54.462053  657435 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:58:54.468948  657435 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:58:54.469261  657435 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:58:54.469325  657435 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:58:54.592460  657435 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:58:54.592625  657435 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:58:55.094193  657435 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.747634ms
	I1222 23:58:55.097173  657435 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1222 23:58:55.097308  657435 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1222 23:58:55.097427  657435 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1222 23:58:55.097528  657435 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1222 23:58:52.498257  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:54.502780  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:58:57.312274  562224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1222 23:58:57.312332  562224 kubeadm.go:319] 
	I1222 23:58:57.312497  562224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:58:57.315279  562224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1222 23:58:57.315347  562224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:58:57.315483  562224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:58:57.315622  562224 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:58:57.315693  562224 kubeadm.go:319] OS: Linux
	I1222 23:58:57.315763  562224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:58:57.315853  562224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:58:57.315915  562224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:58:57.315958  562224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:58:57.315999  562224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:58:57.316042  562224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:58:57.316085  562224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:58:57.316126  562224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:58:57.316170  562224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:58:57.316231  562224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:58:57.316326  562224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:58:57.316449  562224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1222 23:58:57.316537  562224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:58:57.318228  562224 out.go:252]   - Generating certificates and keys ...
	I1222 23:58:57.318310  562224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:58:57.318384  562224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:58:57.318477  562224 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1222 23:58:57.318563  562224 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1222 23:58:57.318690  562224 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1222 23:58:57.318778  562224 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1222 23:58:57.318890  562224 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1222 23:58:57.318987  562224 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1222 23:58:57.319107  562224 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1222 23:58:57.319222  562224 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1222 23:58:57.319284  562224 kubeadm.go:319] [certs] Using the existing "sa" key
	I1222 23:58:57.319374  562224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1222 23:58:57.319447  562224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1222 23:58:57.319523  562224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1222 23:58:57.319627  562224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1222 23:58:57.319729  562224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1222 23:58:57.319812  562224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1222 23:58:57.319926  562224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1222 23:58:57.320025  562224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1222 23:58:57.321146  562224 out.go:252]   - Booting up control plane ...
	I1222 23:58:57.321235  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1222 23:58:57.321316  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1222 23:58:57.321399  562224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1222 23:58:57.321509  562224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1222 23:58:57.321664  562224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1222 23:58:57.321790  562224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1222 23:58:57.321902  562224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1222 23:58:57.321941  562224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1222 23:58:57.322085  562224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1222 23:58:57.322210  562224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1222 23:58:57.322311  562224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000754156s
	I1222 23:58:57.322328  562224 kubeadm.go:319] 
	I1222 23:58:57.322413  562224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:58:57.322447  562224 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:58:57.322535  562224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:58:57.322541  562224 kubeadm.go:319] 
	I1222 23:58:57.322698  562224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:58:57.322748  562224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:58:57.322800  562224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:58:57.322836  562224 kubeadm.go:319] 
	I1222 23:58:57.322879  562224 kubeadm.go:403] duration metric: took 8m2.88547875s to StartCluster
	I1222 23:58:57.322927  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:58:57.322990  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:58:57.367838  562224 cri.go:96] found id: ""
	I1222 23:58:57.367870  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.367878  562224 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:58:57.367885  562224 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:58:57.367931  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:58:57.397908  562224 cri.go:96] found id: ""
	I1222 23:58:57.397941  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.397950  562224 logs.go:284] No container was found matching "etcd"
	I1222 23:58:57.397959  562224 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:58:57.398020  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:58:57.423495  562224 cri.go:96] found id: ""
	I1222 23:58:57.423516  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.423525  562224 logs.go:284] No container was found matching "coredns"
	I1222 23:58:57.423531  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:58:57.423588  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:58:57.454115  562224 cri.go:96] found id: ""
	I1222 23:58:57.454141  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.454152  562224 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:58:57.454161  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:58:57.454218  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:58:57.487745  562224 cri.go:96] found id: ""
	I1222 23:58:57.487776  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.487787  562224 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:58:57.487796  562224 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:58:57.487865  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:58:57.515544  562224 cri.go:96] found id: ""
	I1222 23:58:57.515566  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.515573  562224 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:58:57.515580  562224 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:58:57.515649  562224 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:58:57.547873  562224 cri.go:96] found id: ""
	I1222 23:58:57.547901  562224 logs.go:282] 0 containers: []
	W1222 23:58:57.547912  562224 logs.go:284] No container was found matching "kindnet"
	I1222 23:58:57.547923  562224 logs.go:123] Gathering logs for kubelet ...
	I1222 23:58:57.547937  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:58:57.606575  562224 logs.go:123] Gathering logs for dmesg ...
	I1222 23:58:57.606613  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:58:57.626480  562224 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:58:57.626506  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:58:57.702799  562224 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:58:57.694188    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.695818    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.696498    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.698052    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.698488    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:58:57.694188    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.695818    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.696498    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.698052    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:57.698488    9060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:58:57.702818  562224 logs.go:123] Gathering logs for Docker ...
	I1222 23:58:57.702831  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1222 23:58:57.723963  562224 logs.go:123] Gathering logs for container status ...
	I1222 23:58:57.723991  562224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1222 23:58:57.758791  562224 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000754156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:58:57.758859  562224 out.go:285] * 
	W1222 23:58:57.758931  562224 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000754156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:58:57.758949  562224 out.go:285] * 
	W1222 23:58:57.759255  562224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:58:57.763690  562224 out.go:203] 
	W1222 23:58:57.765170  562224 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000754156s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1222 23:58:57.765230  562224 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:58:57.765264  562224 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:58:57.767215  562224 out.go:203] 
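Note: the K8S_KUBELET_NOT_RUNNING suggestion above targets the cgroup driver, but the kubelet journal further down in this log shows the actual blocker: the v1.35.0-rc.1 kubelet refuses to start on a cgroup v1 host. A quick way to confirm which cgroup hierarchy the node is on (illustrative command, not part of the captured output; "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 hierarchy):

	# prints the filesystem type backing /sys/fs/cgroup on the minikube node
	out/minikube-linux-amd64 -p newest-cni-348344 ssh -- stat -fc %T /sys/fs/cgroup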
	
	
	==> Docker <==
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.307858466Z" level=info msg="Restoring containers: start."
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.323372868Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.337128443Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.852390536Z" level=info msg="Loading containers: done."
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.861539478Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.861589165Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.861664123Z" level=info msg="Initializing buildkit"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.879242339Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885778595Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885844389Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885923371Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885884482Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:50:52 newest-cni-348344 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:50:53 newest-cni-348344 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:50:53 newest-cni-348344 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:58:58.845272    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:58.848110    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:58.848747    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:58.850491    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:58:58.851020    9210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
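Note: this describe-nodes failure is a downstream symptom rather than a separate fault. With the kubelet never healthy, the static kube-apiserver pod was never created (the container status section above is empty), so every request to localhost:8443 is refused. A direct check (illustrative, assuming iproute2's ss is available in the node image):

	# nothing should be listening on the apiserver port while the kubelet is down
	out/minikube-linux-amd64 -p newest-cni-348344 ssh -- sudo ss -tln | grep 8443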
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 26 d0 5e 2a 12 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	[Dec22 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 8d 7a bb 30 f9 08 06
	[ +11.914515] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e b2 e2 cd c9 e7 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 8d 7a bb 30 f9 08 06
	[Dec22 23:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 0a 8a 38 35 e5 59 08 06
	[  +0.181199] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[  +3.354929] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +0.042355] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 51 50 e3 f8 92 08 06
	[Dec22 23:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 23 15 c0 f6 66 08 06
	[  +0.000401] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	
	
	==> kernel <==
	 23:58:58 up  3:41,  0 user,  load average: 2.34, 1.64, 1.61
	Linux newest-cni-348344 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:58:55 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:58:55 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 22 23:58:55 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:55 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:56 newest-cni-348344 kubelet[8937]: E1222 23:58:56.033138    8937 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:58:56 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:58:56 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:58:56 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 23:58:56 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:56 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:56 newest-cni-348344 kubelet[8948]: E1222 23:58:56.793941    8948 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:58:56 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:58:56 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:58:57 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 23:58:57 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:57 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:57 newest-cni-348344 kubelet[9022]: E1222 23:58:57.544231    9022 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:58:57 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:58:57 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:58:58 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 23:58:58 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:58 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:58:58 newest-cni-348344 kubelet[9101]: E1222 23:58:58.301756    9101 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:58:58 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:58:58 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
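Note: the journal above states the root cause directly: the v1.35 kubelet fails configuration validation on cgroup v1 hosts unless FailCgroupV1 is set to false, exactly as the kubeadm SystemVerification warning earlier in this log describes (KEP-5573). A minimal sketch of that opt-out as a KubeletConfiguration document follows; the failCgroupV1 spelling is an assumption based on the option name in the warning, and the fragment would need to be merged into the KubeletConfiguration kubeadm actually renders, not applied as-is:

	# sketch only: a KubeletConfiguration fragment carrying the cgroup v1 opt-out;
	# the field name is assumed from the 'FailCgroupV1' option cited in the warning
	cat <<-'EOF' > /tmp/kubelet-cgroupv1-optout.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF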
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344: exit status 6 (375.212669ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 23:58:59.351658  665519 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-348344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (498.37s)
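Note: the retry proposed by minikube's own Suggestion line is reproduced below for convenience. It switches the kubelet's cgroup driver to systemd, which may not be sufficient here, since the kubelet is rejecting the cgroup v1 hierarchy itself rather than the driver:

	# the exact retry suggested in the failure output above
	out/minikube-linux-amd64 start -p newest-cni-348344 --extra-config=kubelet.cgroup-driver=systemd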

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-063943 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-063943 create -f testdata/busybox.yaml: exit status 1 (44.255165ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-063943" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-063943 create -f testdata/busybox.yaml failed: exit status 1
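Note: the missing context follows from the earlier start failure; the profile was never written to the kubeconfig, which the status stderr further down confirms. Two standard checks (illustrative; update-context is the remedy minikube's own status warning recommends):

	# list the contexts kubectl actually knows about
	kubectl config get-contexts
	# rewrite the kubeconfig entry for this profile, as the status warning suggests
	out/minikube-linux-amd64 -p no-preload-063943 update-context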
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-063943
helpers_test.go:244: (dbg) docker inspect no-preload-063943:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	        "Created": "2025-12-22T23:45:49.557145486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:45:49.595623184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hosts",
	        "LogPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc-json.log",
	        "Name": "/no-preload-063943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-063943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-063943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	                "LowerDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-063943",
	                "Source": "/var/lib/docker/volumes/no-preload-063943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-063943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-063943",
	                "name.minikube.sigs.k8s.io": "no-preload-063943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b12aa3b274c1526f59343d87f9f299a4f40a5ab395883334ecfec940090bf65a",
	            "SandboxKey": "/var/run/docker/netns/b12aa3b274c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-063943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fe1a4d651e77a6056be2344adfa00e0a1474c8d315239814c9f2b4594dd53fd",
	                    "EndpointID": "3b7f033df37f355a43561609b2804995167974287179a0903251f6f85150dc35",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:80:ed:cd:a5:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-063943",
	                        "786df4b77771"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
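Note: the inspect output shows the kic container itself is healthy, Running with the apiserver port published on the loopback interface (8443/tcp mapped to 127.0.0.1:33086), so the failure is confined to the control plane inside it. The mapping can be read back directly (illustrative):

	# prints the host address:port bound to the container's apiserver port
	docker port no-preload-063943 8443/tcp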
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 6 (310.067458ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1222 23:54:12.198093  599142 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-063943 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-003676 sudo systemctl status kubelet --all --full --no-pager                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat kubelet --no-pager                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo journalctl -xeu kubelet --all --full --no-pager                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/kubernetes/kubelet.conf                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /var/lib/kubelet/config.yaml                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status docker --all --full --no-pager                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat docker --no-pager                                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/docker/daemon.json                                                                                       │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo docker system info                                                                                                │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status cri-docker --all --full --no-pager                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat cri-docker --no-pager                                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                          │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                    │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cri-dockerd --version                                                                                             │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status containerd --all --full --no-pager                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat containerd --no-pager                                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /lib/systemd/system/containerd.service                                                                        │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/containerd/config.toml                                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo containerd config dump                                                                                            │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status crio --all --full --no-pager                                                                     │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │                     │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat crio --no-pager                                                                                     │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:54 UTC │
	│ ssh     │ -p kindnet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ ssh     │ -p kindnet-003676 sudo crio config                                                                                                       │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ delete  │ -p kindnet-003676                                                                                                                        │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ start   │ -p calico-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker │ calico-003676  │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:54:03
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:54:03.188210  596624 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:54:03.188498  596624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:54:03.188509  596624 out.go:374] Setting ErrFile to fd 2...
	I1222 23:54:03.188513  596624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:54:03.188776  596624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:54:03.189284  596624 out.go:368] Setting JSON to false
	I1222 23:54:03.190632  596624 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12983,"bootTime":1766434660,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:54:03.190715  596624 start.go:143] virtualization: kvm guest
	I1222 23:54:03.192553  596624 out.go:179] * [calico-003676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:54:03.193777  596624 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:54:03.193793  596624 notify.go:221] Checking for updates...
	I1222 23:54:03.196130  596624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:54:03.197099  596624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:54:03.198050  596624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:54:03.199098  596624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:54:03.200096  596624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:54:03.201403  596624 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201498  596624 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201564  596624 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201680  596624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:54:03.224281  596624 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:54:03.224368  596624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:54:03.285405  596624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:54:03.274504901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:54:03.285511  596624 docker.go:319] overlay module found
	I1222 23:54:03.287152  596624 out.go:179] * Using the docker driver based on user configuration
	I1222 23:54:03.288332  596624 start.go:309] selected driver: docker
	I1222 23:54:03.288365  596624 start.go:928] validating driver "docker" against <nil>
	I1222 23:54:03.288386  596624 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:54:03.289640  596624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:54:03.345546  596624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:54:03.335879526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
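	The two 'docker system info --format "{{json .}}"' calls above are how minikube snapshots daemon state before validating the driver. The same Go-template syntax can pull a single field instead of the whole blob; a minimal sketch in shell, using field names visible in the dump above:
	
	    docker system info --format '{{.CgroupDriver}}'          # prints "cgroupfs" per the dump above
	    docker system info --format '{{json .SecurityOptions}}'  # the narrower query minikube runs later in this log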
	I1222 23:54:03.345788  596624 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 23:54:03.345984  596624 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:54:03.347515  596624 out.go:179] * Using Docker driver with root privileges
	I1222 23:54:03.348454  596624 cni.go:84] Creating CNI manager for "calico"
	I1222 23:54:03.348470  596624 start_flags.go:342] Found "Calico" CNI - setting NetworkPlugin=cni
	I1222 23:54:03.348524  596624 start.go:353] cluster config:
	{Name:calico-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:54:03.349641  596624 out.go:179] * Starting "calico-003676" primary control-plane node in "calico-003676" cluster
	I1222 23:54:03.350623  596624 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:54:03.351623  596624 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:54:03.352613  596624 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:54:03.352648  596624 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1222 23:54:03.352664  596624 cache.go:65] Caching tarball of preloaded images
	I1222 23:54:03.352694  596624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:54:03.352746  596624 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 23:54:03.352758  596624 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
	I1222 23:54:03.352883  596624 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/config.json ...
	I1222 23:54:03.352905  596624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/config.json: {Name:mk5ed9418edb4de606d096fb81b7cc611725239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:54:03.372552  596624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:54:03.372570  596624 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:54:03.372584  596624 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:54:03.372651  596624 start.go:360] acquireMachinesLock for calico-003676: {Name:mk3d3711ac04e83fbd9b0eaa9538d6de80a1d211 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:54:03.372748  596624 start.go:364] duration metric: took 75.94µs to acquireMachinesLock for "calico-003676"
	I1222 23:54:03.372770  596624 start.go:93] Provisioning new machine with config: &{Name:calico-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:54:03.372832  596624 start.go:125] createHost starting for "" (driver="docker")
	I1222 23:54:03.374984  596624 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:54:03.375157  596624 start.go:159] libmachine.API.Create for "calico-003676" (driver="docker")
	I1222 23:54:03.375183  596624 client.go:173] LocalClient.Create starting
	I1222 23:54:03.375301  596624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:54:03.375336  596624 main.go:144] libmachine: Decoding PEM data...
	I1222 23:54:03.375352  596624 main.go:144] libmachine: Parsing certificate...
	I1222 23:54:03.375401  596624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:54:03.375419  596624 main.go:144] libmachine: Decoding PEM data...
	I1222 23:54:03.375434  596624 main.go:144] libmachine: Parsing certificate...
	I1222 23:54:03.375786  596624 cli_runner.go:164] Run: docker network inspect calico-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:54:03.391406  596624 cli_runner.go:211] docker network inspect calico-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:54:03.391470  596624 network_create.go:284] running [docker network inspect calico-003676] to gather additional debugging logs...
	I1222 23:54:03.391494  596624 cli_runner.go:164] Run: docker network inspect calico-003676
	W1222 23:54:03.407573  596624 cli_runner.go:211] docker network inspect calico-003676 returned with exit code 1
	I1222 23:54:03.407617  596624 network_create.go:287] error running [docker network inspect calico-003676]: docker network inspect calico-003676: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-003676 not found
	I1222 23:54:03.407632  596624 network_create.go:289] output of [docker network inspect calico-003676]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-003676 not found
	
	** /stderr **
	I1222 23:54:03.407755  596624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:54:03.424283  596624 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:54:03.424826  596624 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:54:03.425396  596624 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:54:03.426007  596624 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5f6692e5184d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:79:d3:b1:de:45} reservation:<nil>}
	I1222 23:54:03.426798  596624 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f96810}
	I1222 23:54:03.426829  596624 network_create.go:124] attempt to create docker network calico-003676 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 23:54:03.426873  596624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-003676 calico-003676
	I1222 23:54:03.472775  596624 network_create.go:108] docker network calico-003676 192.168.85.0/24 created
	I1222 23:54:03.472821  596624 kic.go:121] calculated static IP "192.168.85.2" for the "calico-003676" container
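	Here network_create.go probes the existing bridge networks, skips the four subnets already in use, and claims the first free /24 (192.168.85.0/24) with the docker network create call shown above. The equivalent manual steps, as a sketch reusing the core flags from that log line:
	
	    docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	      -o com.docker.network.driver.mtu=1500 calico-003676
	    # confirm the subnet/gateway the node container will be attached to
	    docker network inspect calico-003676 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'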
	I1222 23:54:03.472900  596624 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:54:03.489480  596624 cli_runner.go:164] Run: docker volume create calico-003676 --label name.minikube.sigs.k8s.io=calico-003676 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:54:03.508864  596624 oci.go:103] Successfully created a docker volume calico-003676
	I1222 23:54:03.508964  596624 cli_runner.go:164] Run: docker run --rm --name calico-003676-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-003676 --entrypoint /usr/bin/test -v calico-003676:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:54:03.886644  596624 oci.go:107] Successfully prepared a docker volume calico-003676
	I1222 23:54:03.886731  596624 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:54:03.886747  596624 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 23:54:03.886827  596624 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 23:54:07.245968  596624 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (3.359084159s)
	I1222 23:54:07.246000  596624 kic.go:203] duration metric: took 3.359250344s to extract preloaded images to volume ...
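	The preload step above is plain tar into a named volume: the lz4 image tarball is bind-mounted read-only into a throwaway kicbase container and unpacked into the calico-003676 volume that later backs the node's /var. A minimal sketch of the pattern; the tarball path and volume name here are placeholders, not the exact ones from this run:
	
	    PRELOAD=/path/to/preloaded-images.tar.lz4   # placeholder; the run above uses the .minikube cache copy
	    docker volume create demo-var               # placeholder volume name
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD":/preloaded.tar:ro -v demo-var:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288 \
	      -I lz4 -xf /preloaded.tar -C /extractDir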
	W1222 23:54:07.246121  596624 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:54:07.246213  596624 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:54:07.304342  596624 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-003676 --name calico-003676 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-003676 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-003676 --network calico-003676 --ip 192.168.85.2 --volume calico-003676:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:54:07.554410  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Running}}
	I1222 23:54:07.572329  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.590271  596624 cli_runner.go:164] Run: docker exec calico-003676 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:54:07.633613  596624 oci.go:144] the created container "calico-003676" has a running status.
	I1222 23:54:07.633644  596624 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa...
	I1222 23:54:07.693520  596624 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:54:07.720269  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.737996  596624 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:54:07.738023  596624 kic_runner.go:114] Args: [docker exec --privileged calico-003676 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:54:07.790368  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.809086  596624 machine.go:94] provisionDockerMachine start ...
	I1222 23:54:07.809217  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:07.828259  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:07.828636  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:07.828657  596624 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:54:07.829479  596624 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56752->127.0.0.1:33128: read: connection reset by peer
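	provisionDockerMachine resolves the randomly published host port for the container's sshd by indexing NetworkSettings.Ports, which is why the dial above targets 127.0.0.1:33128; the first attempt racing the container boot and hitting a connection reset is expected and retried. The same lookup by hand, as a sketch:
	
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' calico-003676
	    # or equivalently
	    docker port calico-003676 22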
	I1222 23:54:10.177108  502961 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000357414s
	I1222 23:54:10.177165  502961 kubeadm.go:319] 
	I1222 23:54:10.177310  502961 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:54:10.177524  502961 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:54:10.177810  502961 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:54:10.177828  502961 kubeadm.go:319] 
	I1222 23:54:10.178077  502961 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:54:10.178152  502961 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:54:10.178219  502961 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:54:10.178231  502961 kubeadm.go:319] 
	I1222 23:54:10.180123  502961 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:54:10.180949  502961 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:54:10.181073  502961 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:54:10.181313  502961 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 23:54:10.181322  502961 kubeadm.go:319] 
	I1222 23:54:10.181426  502961 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
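	The wait-control-plane failure above means the kubelet never answered its local health endpoint within the 4m0s window. The probes kubeadm recommends can be replayed directly on the affected node (for example via minikube ssh); a sketch using exactly the checks named above:
	
	    curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm polls
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50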
	I1222 23:54:10.181463  502961 kubeadm.go:403] duration metric: took 8m3.290305083s to StartCluster
	I1222 23:54:10.181529  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:54:10.181635  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:54:10.218237  502961 cri.go:96] found id: ""
	I1222 23:54:10.218275  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.218287  502961 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:54:10.218296  502961 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:54:10.218354  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:54:10.243757  502961 cri.go:96] found id: ""
	I1222 23:54:10.243787  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.243799  502961 logs.go:284] No container was found matching "etcd"
	I1222 23:54:10.243808  502961 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:54:10.243868  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:54:10.273079  502961 cri.go:96] found id: ""
	I1222 23:54:10.273107  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.273120  502961 logs.go:284] No container was found matching "coredns"
	I1222 23:54:10.273129  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:54:10.273204  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:54:10.299807  502961 cri.go:96] found id: ""
	I1222 23:54:10.299834  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.299846  502961 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:54:10.299855  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:54:10.299907  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:54:10.324887  502961 cri.go:96] found id: ""
	I1222 23:54:10.324911  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.324919  502961 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:54:10.324926  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:54:10.324980  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:54:10.348741  502961 cri.go:96] found id: ""
	I1222 23:54:10.348766  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.348775  502961 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:54:10.348783  502961 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:54:10.348838  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:54:10.372916  502961 cri.go:96] found id: ""
	I1222 23:54:10.372942  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.372952  502961 logs.go:284] No container was found matching "kindnet"
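	The seven crictl probes above are minikube confirming that no control-plane container was ever created. Collapsed into one loop, as a sketch of the same queries:
	
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl --timeout=10s ps -a --quiet --name="$name"
	    done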
	I1222 23:54:10.372966  502961 logs.go:123] Gathering logs for container status ...
	I1222 23:54:10.372982  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:54:10.399951  502961 logs.go:123] Gathering logs for kubelet ...
	I1222 23:54:10.399977  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:54:10.448425  502961 logs.go:123] Gathering logs for dmesg ...
	I1222 23:54:10.448459  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:54:10.468561  502961 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:54:10.468588  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:54:10.524190  502961 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1222 23:54:10.524222  502961 logs.go:123] Gathering logs for Docker ...
	I1222 23:54:10.524236  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
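	With no containers to inspect, minikube falls back to host-level diagnostics. The same bundle can be collected manually on the node, as a sketch built from the commands in the log above:
	
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u docker -u cri-docker -n 400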
	W1222 23:54:10.545693  502961 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
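	The two SystemVerification warnings above name the real failure: this agent still boots with cgroup v1, and kubelet v1.35 refuses to run there unless cgroup v1 support is explicitly re-enabled in the kubelet configuration. A minimal sketch of that opt-out on the node, assuming the field is spelled failCgroupV1 in KubeletConfiguration and reusing the /var/lib/kubelet/config.yaml path written by kubeadm above:
	
	    # assumed field name per the warning's 'FailCgroupV1'; re-enables deprecated cgroup v1 support
	    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	    sudo systemctl restart kubelet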
	W1222 23:54:10.545755  502961 out.go:285] * 
	W1222 23:54:10.545816  502961 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1222 23:54:10.545830  502961 out.go:285] * 
	W1222 23:54:10.546079  502961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:54:10.548818  502961 out.go:203] 
	W1222 23:54:10.549859  502961 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1222 23:54:10.549906  502961 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:54:10.549926  502961 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:54:10.551014  502961 out.go:203] 
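	The suggestion above maps to a kubelet extra-config on restart; as a sketch (profile name left as a placeholder, since several profiles appear in this report):
	
	    out/minikube-linux-amd64 start -p <profile> --driver=docker --container-runtime=docker \
	      --extra-config=kubelet.cgroup-driver=systemd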
	
	
	==> Docker <==
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.067952621Z" level=info msg="Restoring containers: start."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.087206382Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.105276522Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.608862562Z" level=info msg="Loading containers: done."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622047370Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622108955Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622142828Z" level=info msg="Initializing buildkit"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.654446856Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660274243Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348571Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660422079Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348899Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
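	Note that dockerd (deprecation warning above) and cri-dockerd ("Setting cgroupDriver cgroupfs") both confirm a cgroup v1 host using the cgroupfs driver, matching the kubelet validation failure earlier in this report. A quick way to check which cgroup version and driver a host runs, as a sketch:
	
	    stat -fc %T /sys/fs/cgroup/               # prints cgroup2fs on a v2 host, tmpfs on v1
	    docker info --format '{{.CgroupDriver}}'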
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:54:12.754675    9913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:12.755183    9913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:12.756801    9913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:12.758410    9913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:12.758864    9913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e0 b3 e5 fd 05 08 06
	[Dec22 23:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 5e 7d 42 4c be 08 06
	[ +37.051500] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 9d 29 a8 c7 7e 08 06
	[  +0.046977] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 20 ef 34 9e ff 08 06
	[  +2.780094] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 36 71 18 35 80 08 06
	[  +0.005286] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e 85 6b 14 50 db 08 06
	[Dec22 23:49] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 3d 46 1b 4b 15 08 06
	[  +8.285809] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 de e5 d5 d2 d6 08 06
	[Dec22 23:50] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 9c 73 09 d8 3c 08 06
	[Dec22 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fe dd 45 92 98 69 08 06
	[  +0.005109] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	[Dec22 23:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 26 d0 5e 2a 12 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	
	
	==> kernel <==
	 23:54:12 up  3:36,  0 user,  load average: 0.75, 1.66, 1.65
	Linux no-preload-063943 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:54:09 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:09 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:10 no-preload-063943 kubelet[9477]: E1222 23:54:10.041515    9477 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 22 23:54:10 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:10 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:10 no-preload-063943 kubelet[9613]: E1222 23:54:10.784635    9613 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:11 no-preload-063943 kubelet[9756]: E1222 23:54:11.547542    9756 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:11 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:11 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:12 no-preload-063943 kubelet[9798]: E1222 23:54:12.301568    9798 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:12 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:12 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
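The kubelet journal above is the root cause of this group of failures: kubelet v1.35.0-rc.1 fails its own configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restart-loops it (restart counter 319 through 322 within a few seconds), and the apiserver on :8443 therefore never answers. The kubeadm warning later in this report names the opt-out: the KubeletConfiguration option 'FailCgroupV1' must be set to 'false', and the validation explicitly skipped. A minimal sketch of that override, assuming the v1beta1 field spelling failCgroupV1 (taken from the warning text, not verified against this kubelet build) and the /var/lib/kubelet/config.yaml path the audit commands in this report inspect:

	# /var/lib/kubelet/config.yaml -- sketch only; field name from the kubeadm warning text
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# opt back in to the deprecated cgroup v1 support so kubelet v1.35+ starts
	failCgroupV1: false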
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 6 (307.382815ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 23:54:13.120022  599695 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
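Note that exit status 6 layers a kubeconfig problem on top of the cluster failure: status.go:458 reports that the "no-preload-063943" endpoint is missing from the kubeconfig, so the harness cannot even attempt kubectl commands. If the cluster were reachable, the repair that minikube's own warning suggests would look like this (sketch only, using this report's profile name and binary path):

	# rewrite the kubeconfig entry for the profile, then confirm the active context
	out/minikube-linux-amd64 -p no-preload-063943 update-context
	kubectl config current-context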
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-063943
helpers_test.go:244: (dbg) docker inspect no-preload-063943:

-- stdout --
	[
	    {
	        "Id": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	        "Created": "2025-12-22T23:45:49.557145486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:45:49.595623184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hosts",
	        "LogPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc-json.log",
	        "Name": "/no-preload-063943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-063943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-063943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	                "LowerDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-063943",
	                "Source": "/var/lib/docker/volumes/no-preload-063943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-063943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-063943",
	                "name.minikube.sigs.k8s.io": "no-preload-063943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b12aa3b274c1526f59343d87f9f299a4f40a5ab395883334ecfec940090bf65a",
	            "SandboxKey": "/var/run/docker/netns/b12aa3b274c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-063943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fe1a4d651e77a6056be2344adfa00e0a1474c8d315239814c9f2b4594dd53fd",
	                    "EndpointID": "3b7f033df37f355a43561609b2804995167974287179a0903251f6f85150dc35",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:80:ed:cd:a5:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-063943",
	                        "786df4b77771"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
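The inspect output is worth contrasting with the kubelet journal: the kic container itself is healthy ("Status": "running", "RestartCount": 0) and runs with "CgroupnsMode": "host", i.e. it shares the host's cgroup v1 hierarchy that the kubelet inside rejects, which is why the Docker-level checks pass while kubelet crash-loops. A one-liner to pull just those fields instead of the full JSON (standard docker Go-template syntax; field names as they appear in the dump above):

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} cgroupns={{.HostConfig.CgroupnsMode}}' no-preload-063943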
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 6 (309.765295ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 23:54:13.447400  599974 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-063943 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-003676 sudo systemctl status kubelet --all --full --no-pager                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat kubelet --no-pager                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo journalctl -xeu kubelet --all --full --no-pager                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/kubernetes/kubelet.conf                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /var/lib/kubelet/config.yaml                                                                                  │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status docker --all --full --no-pager                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat docker --no-pager                                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/docker/daemon.json                                                                                       │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo docker system info                                                                                                │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status cri-docker --all --full --no-pager                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat cri-docker --no-pager                                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                          │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                    │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cri-dockerd --version                                                                                             │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status containerd --all --full --no-pager                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat containerd --no-pager                                                                               │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /lib/systemd/system/containerd.service                                                                        │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo cat /etc/containerd/config.toml                                                                                   │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo containerd config dump                                                                                            │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:53 UTC │
	│ ssh     │ -p kindnet-003676 sudo systemctl status crio --all --full --no-pager                                                                     │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │                     │
	│ ssh     │ -p kindnet-003676 sudo systemctl cat crio --no-pager                                                                                     │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:53 UTC │ 22 Dec 25 23:54 UTC │
	│ ssh     │ -p kindnet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                           │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ ssh     │ -p kindnet-003676 sudo crio config                                                                                                       │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ delete  │ -p kindnet-003676                                                                                                                        │ kindnet-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │ 22 Dec 25 23:54 UTC │
	│ start   │ -p calico-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker │ calico-003676  │ jenkins │ v1.37.0 │ 22 Dec 25 23:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:54:03
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:54:03.188210  596624 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:54:03.188498  596624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:54:03.188509  596624 out.go:374] Setting ErrFile to fd 2...
	I1222 23:54:03.188513  596624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:54:03.188776  596624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:54:03.189284  596624 out.go:368] Setting JSON to false
	I1222 23:54:03.190632  596624 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12983,"bootTime":1766434660,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:54:03.190715  596624 start.go:143] virtualization: kvm guest
	I1222 23:54:03.192553  596624 out.go:179] * [calico-003676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:54:03.193777  596624 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:54:03.193793  596624 notify.go:221] Checking for updates...
	I1222 23:54:03.196130  596624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:54:03.197099  596624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:54:03.198050  596624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:54:03.199098  596624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:54:03.200096  596624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:54:03.201403  596624 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201498  596624 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201564  596624 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:54:03.201680  596624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:54:03.224281  596624 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:54:03.224368  596624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:54:03.285405  596624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:54:03.274504901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:54:03.285511  596624 docker.go:319] overlay module found
	I1222 23:54:03.287152  596624 out.go:179] * Using the docker driver based on user configuration
	I1222 23:54:03.288332  596624 start.go:309] selected driver: docker
	I1222 23:54:03.288365  596624 start.go:928] validating driver "docker" against <nil>
	I1222 23:54:03.288386  596624 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:54:03.289640  596624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:54:03.345546  596624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:54:03.335879526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:54:03.345788  596624 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 23:54:03.345984  596624 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:54:03.347515  596624 out.go:179] * Using Docker driver with root privileges
	I1222 23:54:03.348454  596624 cni.go:84] Creating CNI manager for "calico"
	I1222 23:54:03.348470  596624 start_flags.go:342] Found "Calico" CNI - setting NetworkPlugin=cni
	I1222 23:54:03.348524  596624 start.go:353] cluster config:
	{Name:calico-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:54:03.349641  596624 out.go:179] * Starting "calico-003676" primary control-plane node in "calico-003676" cluster
	I1222 23:54:03.350623  596624 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:54:03.351623  596624 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:54:03.352613  596624 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:54:03.352648  596624 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1222 23:54:03.352664  596624 cache.go:65] Caching tarball of preloaded images
	I1222 23:54:03.352694  596624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:54:03.352746  596624 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 23:54:03.352758  596624 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
	I1222 23:54:03.352883  596624 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/config.json ...
	I1222 23:54:03.352905  596624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/config.json: {Name:mk5ed9418edb4de606d096fb81b7cc611725239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:54:03.372552  596624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:54:03.372570  596624 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:54:03.372584  596624 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:54:03.372651  596624 start.go:360] acquireMachinesLock for calico-003676: {Name:mk3d3711ac04e83fbd9b0eaa9538d6de80a1d211 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:54:03.372748  596624 start.go:364] duration metric: took 75.94µs to acquireMachinesLock for "calico-003676"
	I1222 23:54:03.372770  596624 start.go:93] Provisioning new machine with config: &{Name:calico-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:54:03.372832  596624 start.go:125] createHost starting for "" (driver="docker")
	I1222 23:54:03.374984  596624 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:54:03.375157  596624 start.go:159] libmachine.API.Create for "calico-003676" (driver="docker")
	I1222 23:54:03.375183  596624 client.go:173] LocalClient.Create starting
	I1222 23:54:03.375301  596624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:54:03.375336  596624 main.go:144] libmachine: Decoding PEM data...
	I1222 23:54:03.375352  596624 main.go:144] libmachine: Parsing certificate...
	I1222 23:54:03.375401  596624 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:54:03.375419  596624 main.go:144] libmachine: Decoding PEM data...
	I1222 23:54:03.375434  596624 main.go:144] libmachine: Parsing certificate...
	I1222 23:54:03.375786  596624 cli_runner.go:164] Run: docker network inspect calico-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:54:03.391406  596624 cli_runner.go:211] docker network inspect calico-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:54:03.391470  596624 network_create.go:284] running [docker network inspect calico-003676] to gather additional debugging logs...
	I1222 23:54:03.391494  596624 cli_runner.go:164] Run: docker network inspect calico-003676
	W1222 23:54:03.407573  596624 cli_runner.go:211] docker network inspect calico-003676 returned with exit code 1
	I1222 23:54:03.407617  596624 network_create.go:287] error running [docker network inspect calico-003676]: docker network inspect calico-003676: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-003676 not found
	I1222 23:54:03.407632  596624 network_create.go:289] output of [docker network inspect calico-003676]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-003676 not found
	
	** /stderr **
	I1222 23:54:03.407755  596624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:54:03.424283  596624 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:54:03.424826  596624 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:54:03.425396  596624 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:54:03.426007  596624 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5f6692e5184d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:79:d3:b1:de:45} reservation:<nil>}
	I1222 23:54:03.426798  596624 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f96810}
	I1222 23:54:03.426829  596624 network_create.go:124] attempt to create docker network calico-003676 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 23:54:03.426873  596624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-003676 calico-003676
	I1222 23:54:03.472775  596624 network_create.go:108] docker network calico-003676 192.168.85.0/24 created
	I1222 23:54:03.472821  596624 kic.go:121] calculated static IP "192.168.85.2" for the "calico-003676" container
	I1222 23:54:03.472900  596624 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:54:03.489480  596624 cli_runner.go:164] Run: docker volume create calico-003676 --label name.minikube.sigs.k8s.io=calico-003676 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:54:03.508864  596624 oci.go:103] Successfully created a docker volume calico-003676
	I1222 23:54:03.508964  596624 cli_runner.go:164] Run: docker run --rm --name calico-003676-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-003676 --entrypoint /usr/bin/test -v calico-003676:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:54:03.886644  596624 oci.go:107] Successfully prepared a docker volume calico-003676
	I1222 23:54:03.886731  596624 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:54:03.886747  596624 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 23:54:03.886827  596624 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 23:54:07.245968  596624 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (3.359084159s)
	I1222 23:54:07.246000  596624 kic.go:203] duration metric: took 3.359250344s to extract preloaded images to volume ...
	W1222 23:54:07.246121  596624 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:54:07.246213  596624 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:54:07.304342  596624 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-003676 --name calico-003676 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-003676 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-003676 --network calico-003676 --ip 192.168.85.2 --volume calico-003676:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:54:07.554410  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Running}}
	I1222 23:54:07.572329  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.590271  596624 cli_runner.go:164] Run: docker exec calico-003676 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:54:07.633613  596624 oci.go:144] the created container "calico-003676" has a running status.
	I1222 23:54:07.633644  596624 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa...
	I1222 23:54:07.693520  596624 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:54:07.720269  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.737996  596624 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:54:07.738023  596624 kic_runner.go:114] Args: [docker exec --privileged calico-003676 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:54:07.790368  596624 cli_runner.go:164] Run: docker container inspect calico-003676 --format={{.State.Status}}
	I1222 23:54:07.809086  596624 machine.go:94] provisionDockerMachine start ...
	I1222 23:54:07.809217  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:07.828259  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:07.828636  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:07.828657  596624 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:54:07.829479  596624 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56752->127.0.0.1:33128: read: connection reset by peer
	I1222 23:54:10.177108  502961 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000357414s
	I1222 23:54:10.177165  502961 kubeadm.go:319] 
	I1222 23:54:10.177310  502961 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1222 23:54:10.177524  502961 kubeadm.go:319] 	- The kubelet is not running
	I1222 23:54:10.177810  502961 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1222 23:54:10.177828  502961 kubeadm.go:319] 
	I1222 23:54:10.178077  502961 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1222 23:54:10.178152  502961 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1222 23:54:10.178219  502961 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1222 23:54:10.178231  502961 kubeadm.go:319] 
	I1222 23:54:10.180123  502961 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1222 23:54:10.180949  502961 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1222 23:54:10.181073  502961 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1222 23:54:10.181313  502961 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1222 23:54:10.181322  502961 kubeadm.go:319] 
	I1222 23:54:10.181426  502961 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1222 23:54:10.181463  502961 kubeadm.go:403] duration metric: took 8m3.290305083s to StartCluster
	I1222 23:54:10.181529  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1222 23:54:10.181635  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1222 23:54:10.218237  502961 cri.go:96] found id: ""
	I1222 23:54:10.218275  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.218287  502961 logs.go:284] No container was found matching "kube-apiserver"
	I1222 23:54:10.218296  502961 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1222 23:54:10.218354  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1222 23:54:10.243757  502961 cri.go:96] found id: ""
	I1222 23:54:10.243787  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.243799  502961 logs.go:284] No container was found matching "etcd"
	I1222 23:54:10.243808  502961 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1222 23:54:10.243868  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1222 23:54:10.273079  502961 cri.go:96] found id: ""
	I1222 23:54:10.273107  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.273120  502961 logs.go:284] No container was found matching "coredns"
	I1222 23:54:10.273129  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1222 23:54:10.273204  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1222 23:54:10.299807  502961 cri.go:96] found id: ""
	I1222 23:54:10.299834  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.299846  502961 logs.go:284] No container was found matching "kube-scheduler"
	I1222 23:54:10.299855  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1222 23:54:10.299907  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1222 23:54:10.324887  502961 cri.go:96] found id: ""
	I1222 23:54:10.324911  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.324919  502961 logs.go:284] No container was found matching "kube-proxy"
	I1222 23:54:10.324926  502961 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1222 23:54:10.324980  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1222 23:54:10.348741  502961 cri.go:96] found id: ""
	I1222 23:54:10.348766  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.348775  502961 logs.go:284] No container was found matching "kube-controller-manager"
	I1222 23:54:10.348783  502961 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1222 23:54:10.348838  502961 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1222 23:54:10.372916  502961 cri.go:96] found id: ""
	I1222 23:54:10.372942  502961 logs.go:282] 0 containers: []
	W1222 23:54:10.372952  502961 logs.go:284] No container was found matching "kindnet"
	I1222 23:54:10.372966  502961 logs.go:123] Gathering logs for container status ...
	I1222 23:54:10.372982  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1222 23:54:10.399951  502961 logs.go:123] Gathering logs for kubelet ...
	I1222 23:54:10.399977  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1222 23:54:10.448425  502961 logs.go:123] Gathering logs for dmesg ...
	I1222 23:54:10.448459  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1222 23:54:10.468561  502961 logs.go:123] Gathering logs for describe nodes ...
	I1222 23:54:10.468588  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1222 23:54:10.524190  502961 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1222 23:54:10.517433    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.517931    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519492    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.519894    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:10.521397    9601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
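	Note: every control-plane probe in this stretch is refused on both ports: the apiserver on localhost:8443 (above) and the kubelet healthz on 127.0.0.1:10248 (in the kubeadm error). A hedged pair of direct probes from the node, using the same endpoints the log names:

	    curl -sS http://127.0.0.1:10248/healthz   # kubelet health endpoint from the kubeadm wait-control-plane error
	    curl -skS https://localhost:8443/readyz   # apiserver readiness; -k because the cluster CA is not trusted from the shell

	Both should fail with "connection refused" here, confirming neither daemon is listening.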
	I1222 23:54:10.524222  502961 logs.go:123] Gathering logs for Docker ...
	I1222 23:54:10.524236  502961 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1222 23:54:10.545693  502961 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1045-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000357414s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1222 23:54:10.545755  502961 out.go:285] * 
	W1222 23:54:10.545816  502961 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: byte-for-byte identical to the kubeadm init output quoted above]
	
	W1222 23:54:10.545830  502961 out.go:285] * 
	W1222 23:54:10.546079  502961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1222 23:54:10.548818  502961 out.go:203] 
	W1222 23:54:10.549859  502961 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: byte-for-byte identical to the kubeadm init output quoted above]
	
	W1222 23:54:10.549906  502961 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1222 23:54:10.549926  502961 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1222 23:54:10.551014  502961 out.go:203] 
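	Note: this StartWithProxy failure reduces to a single root cause that recurs throughout the report: kubelet v1.35.0-rc.1 validates the host cgroup version at startup and, per the SystemVerification warning above, refuses to run on a cgroup v1 host unless the configuration option it names ('FailCgroupV1') is explicitly set to 'false' and the validation is explicitly skipped. A minimal check sketch (standard coreutils/grep, not commands from this log; the config path is the one kubeadm wrote above):

	    # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	    stat -fc %T /sys/fs/cgroup
	    # inspect the cgroup-related settings in the rendered kubelet config
	    sudo grep -i cgroup /var/lib/kubelet/config.yaml

	Migrating the host to cgroup v2 avoids the flag entirely, per the KEP link in the warning.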
	I1222 23:54:10.980884  596624 main.go:144] libmachine: SSH cmd err, output: <nil>: calico-003676
	
	I1222 23:54:10.980920  596624 ubuntu.go:182] provisioning hostname "calico-003676"
	I1222 23:54:10.980986  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:11.000017  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:11.000339  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:11.000364  596624 main.go:144] libmachine: About to run SSH command:
	sudo hostname calico-003676 && echo "calico-003676" | sudo tee /etc/hostname
	I1222 23:54:11.158608  596624 main.go:144] libmachine: SSH cmd err, output: <nil>: calico-003676
	
	I1222 23:54:11.158685  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:11.176003  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:11.176230  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:11.176248  596624 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-003676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-003676/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-003676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:54:11.320821  596624 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:54:11.320851  596624 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:54:11.320888  596624 ubuntu.go:190] setting up certificates
	I1222 23:54:11.320901  596624 provision.go:84] configureAuth start
	I1222 23:54:11.320957  596624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-003676
	I1222 23:54:11.340225  596624 provision.go:143] copyHostCerts
	I1222 23:54:11.340286  596624 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:54:11.340297  596624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:54:11.340361  596624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:54:11.340447  596624 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:54:11.340455  596624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:54:11.340491  596624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:54:11.340542  596624 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:54:11.340549  596624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:54:11.340571  596624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:54:11.340665  596624 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.calico-003676 san=[127.0.0.1 192.168.85.2 calico-003676 localhost minikube]
	I1222 23:54:11.363408  596624 provision.go:177] copyRemoteCerts
	I1222 23:54:11.363462  596624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:54:11.363500  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:11.381956  596624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa Username:docker}
	I1222 23:54:11.485911  596624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:54:11.506211  596624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1222 23:54:11.526880  596624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1222 23:54:11.552319  596624 provision.go:87] duration metric: took 231.404245ms to configureAuth
	I1222 23:54:11.552345  596624 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:54:11.552505  596624 config.go:182] Loaded profile config "calico-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:54:11.552555  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:11.571562  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:11.571880  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:11.571899  596624 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:54:11.719526  596624 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:54:11.719551  596624 ubuntu.go:71] root file system type: overlay
	I1222 23:54:11.719694  596624 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:54:11.719751  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:11.736897  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:11.737159  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:11.737217  596624 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:54:11.897302  596624 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:54:11.897390  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:11.915666  596624 main.go:144] libmachine: Using SSH client type: native
	I1222 23:54:11.915945  596624 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1222 23:54:11.915973  596624 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:54:13.095993  596624 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-22 23:54:11.895429757 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1222 23:54:13.096024  596624 machine.go:97] duration metric: took 5.286914245s to provisionDockerMachine
	I1222 23:54:13.096038  596624 client.go:176] duration metric: took 9.720847267s to LocalClient.Create
	I1222 23:54:13.096059  596624 start.go:167] duration metric: took 9.720900045s to libmachine.API.Create "calico-003676"
	I1222 23:54:13.096067  596624 start.go:293] postStartSetup for "calico-003676" (driver="docker")
	I1222 23:54:13.096080  596624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:54:13.096156  596624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:54:13.096205  596624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-003676
	I1222 23:54:13.117466  596624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/calico-003676/id_rsa Username:docker}
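	Note on the provisioning sequence above: minikube renders the full docker unit to /lib/systemd/system/docker.service.new and only swaps it into place (followed by daemon-reload, enable, and a forced restart) when diff -u reports a change, which is why the unified diff is echoed back verbatim. A hedged way to verify the result on the node (standard systemctl invocations, not taken from this log):

	    systemctl cat docker        # show the unit actually loaded, including the cleared-and-reset ExecStart=
	    systemctl is-active docker  # expect "active" after the forced restart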
	
	
	==> Docker <==
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.067952621Z" level=info msg="Restoring containers: start."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.087206382Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.105276522Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.608862562Z" level=info msg="Loading containers: done."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622047370Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622108955Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622142828Z" level=info msg="Initializing buildkit"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.654446856Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660274243Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348571Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660422079Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348899Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
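	Note: the runtime side is consistent with the kubelet failure: dockerd itself warns that cgroup v1 support is deprecated, and cri-dockerd reports "Setting cgroupDriver cgroupfs". A hedged one-liner to read both facts straight from the daemon (CgroupDriver/CgroupVersion are standard docker info template fields):

	    docker info --format '{{.CgroupDriver}}/{{.CgroupVersion}}'   # expected here: cgroupfs/1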
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:54:13.999940   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:14.000423   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:14.002019   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:14.002504   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:54:14.004099   10087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e0 b3 e5 fd 05 08 06
	[Dec22 23:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 5e 7d 42 4c be 08 06
	[ +37.051500] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 9d 29 a8 c7 7e 08 06
	[  +0.046977] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 20 ef 34 9e ff 08 06
	[  +2.780094] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 36 71 18 35 80 08 06
	[  +0.005286] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e 85 6b 14 50 db 08 06
	[Dec22 23:49] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 3d 46 1b 4b 15 08 06
	[  +8.285809] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 de e5 d5 d2 d6 08 06
	[Dec22 23:50] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 9c 73 09 d8 3c 08 06
	[Dec22 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fe dd 45 92 98 69 08 06
	[  +0.005109] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	[Dec22 23:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 26 d0 5e 2a 12 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	
	
	==> kernel <==
	 23:54:14 up  3:36,  0 user,  load average: 0.93, 1.68, 1.65
	Linux no-preload-063943 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:54:10 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:11 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:11 no-preload-063943 kubelet[9756]: E1222 23:54:11.547542    9756 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:11 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:11 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:12 no-preload-063943 kubelet[9798]: E1222 23:54:12.301568    9798 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:12 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:12 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:12 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:13 no-preload-063943 kubelet[9934]: E1222 23:54:13.043099    9934 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:13 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:13 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:54:13 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 22 23:54:13 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:13 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:54:13 no-preload-063943 kubelet[9980]: E1222 23:54:13.802172    9980 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:54:13 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:54:13 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
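Note: the kubelet journal excerpt above shows the unit in a tight systemd restart loop (restart counter 321 through 324 in roughly three seconds), and every attempt dies in configuration validation with the same message: "kubelet is configured to not run on a host using cgroup v1". A hedged way to pull the same evidence directly on the node:

    sudo journalctl -u kubelet -n 50 --no-pager | grep -i 'cgroup v1'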
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 6 (304.642828ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 23:54:14.379668  600579 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.55s)
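Note: the status probe fails with exit status 6 because the profile has no endpoint in the kubeconfig at all (status.go:458), so the "minikube update-context" hint printed in stdout is a best-effort suggestion; whether it can repair a missing (rather than stale) entry here is an assumption. The sketch, using the report's own binary and profile:

    out/minikube-linux-amd64 update-context -p no-preload-063943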

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (110.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-063943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1222 23:54:16.627682   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:26.868318   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:37.275759   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:47.349283   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:53.440896   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-063943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m49.456926379s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-063943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-063943 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-063943 describe deploy/metrics-server -n kube-system: exit status 1 (44.766618ms)

** stderr ** 
	error: context "no-preload-063943" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-063943 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
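Both follow-up checks fail for the same reason: the kubeconfig no longer contains a "no-preload-063943" context, so every kubectl --context invocation exits before touching the cluster. A hedged sketch of a pre-flight check a harness could run first (assumes kubectl is on PATH; the helper name is ours):

    // check_context.go - illustrative sketch, not part of the test suite.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // contextExists reports whether the named context appears in
    // "kubectl config get-contexts -o name" (one context name per line).
    func contextExists(name string) (bool, error) {
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		return false, err
    	}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line == name {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := contextExists("no-preload-063943")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("context present:", ok)
    }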
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-063943
helpers_test.go:244: (dbg) docker inspect no-preload-063943:

-- stdout --
	[
	    {
	        "Id": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	        "Created": "2025-12-22T23:45:49.557145486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:45:49.595623184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hosts",
	        "LogPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc-json.log",
	        "Name": "/no-preload-063943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-063943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-063943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	                "LowerDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-063943",
	                "Source": "/var/lib/docker/volumes/no-preload-063943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-063943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-063943",
	                "name.minikube.sigs.k8s.io": "no-preload-063943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b12aa3b274c1526f59343d87f9f299a4f40a5ab395883334ecfec940090bf65a",
	            "SandboxKey": "/var/run/docker/netns/b12aa3b274c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-063943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fe1a4d651e77a6056be2344adfa00e0a1474c8d315239814c9f2b4594dd53fd",
	                    "EndpointID": "3b7f033df37f355a43561609b2804995167974287179a0903251f6f85150dc35",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:80:ed:cd:a5:e1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-063943",
	                        "786df4b77771"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
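Rather than scanning the full JSON dump above by hand, individual fields can be pulled out with docker inspect's Go templates; the template below mirrors the ones the minikube harness itself runs later in this log, pointed at 8443/tcp instead of 22/tcp. A small illustrative sketch (file name and error handling are ours):

    // inspect_port.go - sketch for extracting the published apiserver port.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "inspect", "-f", tmpl, "no-preload-063943").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
    		os.Exit(1)
    	}
    	// For the container above this prints 33086 (127.0.0.1:33086 -> 8443/tcp).
    	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
    }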
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 6 (298.955885ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 23:56:04.199556  621501 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
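The exit status 6 stems from the same stale kubeconfig: status.go cannot find the "no-preload-063943" endpoint in the file named above, and the stdout block already names the fix, minikube update-context. A rough sketch of that recovery path (assumes the minikube binary is on PATH and KUBECONFIG is set, as the harness does; the check is a plain substring match, not a YAML parse):

    // stale_kubeconfig.go - hedged sketch of the suggested recovery step.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	profile := "no-preload-063943"
    	kubeconfig := os.Getenv("KUBECONFIG") // set explicitly by the harness
    	data, err := os.ReadFile(kubeconfig)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "read kubeconfig:", err)
    		os.Exit(1)
    	}
    	if !strings.Contains(string(data), profile) {
    		// Same condition status.go reported as exit status 6 above.
    		fmt.Printf("%q missing from %s; running update-context\n", profile, kubeconfig)
    		cmd := exec.Command("minikube", "-p", profile, "update-context")
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			os.Exit(1)
    		}
    	}
    }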
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-063943 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                 │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-003676 sudo systemctl status kubelet --all --full --no-pager                                                                                               │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl cat kubelet --no-pager                                                                                                               │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl status docker --all --full --no-pager                                                                                                │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl cat docker --no-pager                                                                                                                │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cat /etc/docker/daemon.json                                                                                                                    │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo docker system info                                                                                                                             │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl cat cri-docker --no-pager                                                                                                            │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cri-dockerd --version                                                                                                                          │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl status containerd --all --full --no-pager                                                                                            │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl cat containerd --no-pager                                                                                                            │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo cat /etc/containerd/config.toml                                                                                                                │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo containerd config dump                                                                                                                         │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo systemctl status crio --all --full --no-pager                                                                                                  │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │                     │
	│ ssh     │ -p calico-003676 sudo systemctl cat crio --no-pager                                                                                                                  │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ ssh     │ -p calico-003676 sudo crio config                                                                                                                                    │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ delete  │ -p calico-003676                                                                                                                                                     │ calico-003676         │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │ 22 Dec 25 23:55 UTC │
	│ start   │ -p custom-flannel-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker │ custom-flannel-003676 │ jenkins │ v1.37.0 │ 22 Dec 25 23:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 23:55:46
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 23:55:46.623801  617786 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:55:46.624030  617786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:55:46.624038  617786 out.go:374] Setting ErrFile to fd 2...
	I1222 23:55:46.624042  617786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:55:46.624234  617786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:55:46.624719  617786 out.go:368] Setting JSON to false
	I1222 23:55:46.625822  617786 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13087,"bootTime":1766434660,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:55:46.625875  617786 start.go:143] virtualization: kvm guest
	I1222 23:55:46.627960  617786 out.go:179] * [custom-flannel-003676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:55:46.629173  617786 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:55:46.629179  617786 notify.go:221] Checking for updates...
	I1222 23:55:46.631446  617786 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:55:46.632380  617786 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:55:46.633361  617786 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:55:46.634390  617786 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:55:46.635478  617786 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:55:46.636929  617786 config.go:182] Loaded profile config "kubernetes-upgrade-767823": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:55:46.637042  617786 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:55:46.637130  617786 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:55:46.637231  617786 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:55:46.660388  617786 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:55:46.660490  617786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:55:46.713888  617786 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:55:46.704546834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:55:46.714028  617786 docker.go:319] overlay module found
	I1222 23:55:46.715699  617786 out.go:179] * Using the docker driver based on user configuration
	I1222 23:55:46.716734  617786 start.go:309] selected driver: docker
	I1222 23:55:46.716748  617786 start.go:928] validating driver "docker" against <nil>
	I1222 23:55:46.716762  617786 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:55:46.717887  617786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:55:46.775055  617786 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:55:46.765914644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:55:46.775217  617786 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 23:55:46.775427  617786 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:55:46.776934  617786 out.go:179] * Using Docker driver with root privileges
	I1222 23:55:46.777990  617786 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1222 23:55:46.778014  617786 start_flags.go:342] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1222 23:55:46.778072  617786 start.go:353] cluster config:
	{Name:custom-flannel-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:55:46.779223  617786 out.go:179] * Starting "custom-flannel-003676" primary control-plane node in "custom-flannel-003676" cluster
	I1222 23:55:46.780178  617786 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:55:46.781286  617786 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:55:46.782248  617786 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:55:46.782276  617786 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1222 23:55:46.782289  617786 cache.go:65] Caching tarball of preloaded images
	I1222 23:55:46.782340  617786 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:55:46.782366  617786 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1222 23:55:46.782373  617786 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
	I1222 23:55:46.782473  617786 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/config.json ...
	I1222 23:55:46.782495  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/config.json: {Name:mka64dc1f84dc0fa178dd016ad7dcefb44f90dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:46.802521  617786 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:55:46.802543  617786 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:55:46.802557  617786 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:55:46.802609  617786 start.go:360] acquireMachinesLock for custom-flannel-003676: {Name:mk0aa80e285f67670f6250b040e12c585b3d302b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:55:46.802714  617786 start.go:364] duration metric: took 84.285µs to acquireMachinesLock for "custom-flannel-003676"
	I1222 23:55:46.802738  617786 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:55:46.802807  617786 start.go:125] createHost starting for "" (driver="docker")
	I1222 23:55:46.804565  617786 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1222 23:55:46.804794  617786 start.go:159] libmachine.API.Create for "custom-flannel-003676" (driver="docker")
	I1222 23:55:46.804823  617786 client.go:173] LocalClient.Create starting
	I1222 23:55:46.804903  617786 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem
	I1222 23:55:46.804933  617786 main.go:144] libmachine: Decoding PEM data...
	I1222 23:55:46.804949  617786 main.go:144] libmachine: Parsing certificate...
	I1222 23:55:46.804998  617786 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem
	I1222 23:55:46.805015  617786 main.go:144] libmachine: Decoding PEM data...
	I1222 23:55:46.805027  617786 main.go:144] libmachine: Parsing certificate...
	I1222 23:55:46.805334  617786 cli_runner.go:164] Run: docker network inspect custom-flannel-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1222 23:55:46.821577  617786 cli_runner.go:211] docker network inspect custom-flannel-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1222 23:55:46.821674  617786 network_create.go:284] running [docker network inspect custom-flannel-003676] to gather additional debugging logs...
	I1222 23:55:46.821696  617786 cli_runner.go:164] Run: docker network inspect custom-flannel-003676
	W1222 23:55:46.838298  617786 cli_runner.go:211] docker network inspect custom-flannel-003676 returned with exit code 1
	I1222 23:55:46.838344  617786 network_create.go:287] error running [docker network inspect custom-flannel-003676]: docker network inspect custom-flannel-003676: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-003676 not found
	I1222 23:55:46.838363  617786 network_create.go:289] output of [docker network inspect custom-flannel-003676]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-003676 not found
	
	** /stderr **
	I1222 23:55:46.838500  617786 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:55:46.856294  617786 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
	I1222 23:55:46.857075  617786 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52673d7f67eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:59:44:a0:0a:fb} reservation:<nil>}
	I1222 23:55:46.857895  617786 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f98da515e43c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:95:d8:50:7b:ba} reservation:<nil>}
	I1222 23:55:46.858816  617786 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5f6692e5184d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:79:d3:b1:de:45} reservation:<nil>}
	I1222 23:55:46.859835  617786 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eadb50}
	I1222 23:55:46.859866  617786 network_create.go:124] attempt to create docker network custom-flannel-003676 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1222 23:55:46.859924  617786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-003676 custom-flannel-003676
	I1222 23:55:46.906706  617786 network_create.go:108] docker network custom-flannel-003676 192.168.85.0/24 created
	I1222 23:55:46.906738  617786 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-003676" container
	I1222 23:55:46.906795  617786 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1222 23:55:46.923614  617786 cli_runner.go:164] Run: docker volume create custom-flannel-003676 --label name.minikube.sigs.k8s.io=custom-flannel-003676 --label created_by.minikube.sigs.k8s.io=true
	I1222 23:55:46.940826  617786 oci.go:103] Successfully created a docker volume custom-flannel-003676
	I1222 23:55:46.940922  617786 cli_runner.go:164] Run: docker run --rm --name custom-flannel-003676-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-003676 --entrypoint /usr/bin/test -v custom-flannel-003676:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -d /var/lib
	I1222 23:55:47.311262  617786 oci.go:107] Successfully prepared a docker volume custom-flannel-003676
	I1222 23:55:47.311344  617786 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:55:47.311359  617786 kic.go:194] Starting extracting preloaded images to volume ...
	I1222 23:55:47.311410  617786 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir
	I1222 23:55:50.658660  617786 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-003676:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 -I lz4 -xf /preloaded.tar -C /extractDir: (3.34718915s)
	I1222 23:55:50.658698  617786 kic.go:203] duration metric: took 3.347334218s to extract preloaded images to volume ...
	W1222 23:55:50.658871  617786 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1222 23:55:50.658965  617786 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1222 23:55:50.711173  617786 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-003676 --name custom-flannel-003676 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-003676 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-003676 --network custom-flannel-003676 --ip 192.168.85.2 --volume custom-flannel-003676:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484
	I1222 23:55:50.979790  617786 cli_runner.go:164] Run: docker container inspect custom-flannel-003676 --format={{.State.Running}}
	I1222 23:55:50.999185  617786 cli_runner.go:164] Run: docker container inspect custom-flannel-003676 --format={{.State.Status}}
	I1222 23:55:51.017748  617786 cli_runner.go:164] Run: docker exec custom-flannel-003676 stat /var/lib/dpkg/alternatives/iptables
	I1222 23:55:51.063896  617786 oci.go:144] the created container "custom-flannel-003676" has a running status.
	I1222 23:55:51.063930  617786 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa...
	I1222 23:55:51.079018  617786 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1222 23:55:51.102989  617786 cli_runner.go:164] Run: docker container inspect custom-flannel-003676 --format={{.State.Status}}
	I1222 23:55:51.121171  617786 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1222 23:55:51.121191  617786 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-003676 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1222 23:55:51.174701  617786 cli_runner.go:164] Run: docker container inspect custom-flannel-003676 --format={{.State.Status}}
	I1222 23:55:51.192799  617786 machine.go:94] provisionDockerMachine start ...
	I1222 23:55:51.192885  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:51.210561  617786 main.go:144] libmachine: Using SSH client type: native
	I1222 23:55:51.210870  617786 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1222 23:55:51.210886  617786 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:55:51.211519  617786 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43326->127.0.0.1:33133: read: connection reset by peer
	I1222 23:55:54.355730  617786 main.go:144] libmachine: SSH cmd err, output: <nil>: custom-flannel-003676
	
	I1222 23:55:54.355757  617786 ubuntu.go:182] provisioning hostname "custom-flannel-003676"
	I1222 23:55:54.355812  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:54.372970  617786 main.go:144] libmachine: Using SSH client type: native
	I1222 23:55:54.373181  617786 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1222 23:55:54.373193  617786 main.go:144] libmachine: About to run SSH command:
	sudo hostname custom-flannel-003676 && echo "custom-flannel-003676" | sudo tee /etc/hostname
	I1222 23:55:54.527511  617786 main.go:144] libmachine: SSH cmd err, output: <nil>: custom-flannel-003676
	
	I1222 23:55:54.527702  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:54.551511  617786 main.go:144] libmachine: Using SSH client type: native
	I1222 23:55:54.551774  617786 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1222 23:55:54.551793  617786 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-003676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-003676/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-003676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:55:54.694300  617786 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:55:54.694336  617786 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:55:54.694364  617786 ubuntu.go:190] setting up certificates
	I1222 23:55:54.694375  617786 provision.go:84] configureAuth start
	I1222 23:55:54.694436  617786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-003676
	I1222 23:55:54.712559  617786 provision.go:143] copyHostCerts
	I1222 23:55:54.712639  617786 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:55:54.712651  617786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:55:54.712719  617786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:55:54.712827  617786 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:55:54.712836  617786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:55:54.712866  617786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:55:54.713362  617786 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:55:54.713385  617786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:55:54.713469  617786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:55:54.713581  617786 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-003676 san=[127.0.0.1 192.168.85.2 custom-flannel-003676 localhost minikube]
	I1222 23:55:54.768617  617786 provision.go:177] copyRemoteCerts
	I1222 23:55:54.768681  617786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:55:54.768717  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:54.786163  617786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa Username:docker}
	I1222 23:55:54.886919  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:55:54.905512  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:55:54.922394  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1222 23:55:54.939095  617786 provision.go:87] duration metric: took 244.702078ms to configureAuth
	I1222 23:55:54.939120  617786 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:55:54.939283  617786 config.go:182] Loaded profile config "custom-flannel-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:55:54.939332  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:54.957246  617786 main.go:144] libmachine: Using SSH client type: native
	I1222 23:55:54.957505  617786 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1222 23:55:54.957527  617786 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:55:55.099443  617786 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:55:55.099467  617786 ubuntu.go:71] root file system type: overlay
	I1222 23:55:55.099587  617786 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:55:55.099692  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:55.117630  617786 main.go:144] libmachine: Using SSH client type: native
	I1222 23:55:55.117841  617786 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1222 23:55:55.117898  617786 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:55:55.277783  617786 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:55:55.277872  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:55.301900  617786 main.go:144] libmachine: Using SSH client type: native
	I1222 23:55:55.302116  617786 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1222 23:55:55.302133  617786 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:55:56.409128  617786 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-22 23:55:55.275425247 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1222 23:55:56.409156  617786 machine.go:97] duration metric: took 5.216336629s to provisionDockerMachine
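The unit update above relies on a diff-or-replace idiom: docker.service.new is always written, but the unit is only swapped in and the daemon restarted when diff -u reports a difference (diff exits non-zero on change, which triggers the || block). A standalone sketch of the same pattern:

    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {   # non-zero exit means the files differ
      sudo mv "$new" "$cur"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }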
	I1222 23:55:56.409169  617786 client.go:176] duration metric: took 9.604338112s to LocalClient.Create
	I1222 23:55:56.409186  617786 start.go:167] duration metric: took 9.604393704s to libmachine.API.Create "custom-flannel-003676"
	I1222 23:55:56.409194  617786 start.go:293] postStartSetup for "custom-flannel-003676" (driver="docker")
	I1222 23:55:56.409208  617786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:55:56.409264  617786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:55:56.409320  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:56.427275  617786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa Username:docker}
	I1222 23:55:56.530703  617786 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:55:56.534170  617786 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:55:56.534200  617786 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:55:56.534214  617786 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:55:56.534258  617786 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:55:56.534337  617786 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:55:56.534478  617786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:55:56.541890  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:55:56.561410  617786 start.go:296] duration metric: took 152.198906ms for postStartSetup
	I1222 23:55:56.561813  617786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-003676
	I1222 23:55:56.579647  617786 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/config.json ...
	I1222 23:55:56.579886  617786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:55:56.579927  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:56.596676  617786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa Username:docker}
	I1222 23:55:56.694834  617786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:55:56.699456  617786 start.go:128] duration metric: took 9.896635585s to createHost
	I1222 23:55:56.699477  617786 start.go:83] releasing machines lock for "custom-flannel-003676", held for 9.896751384s
	I1222 23:55:56.699537  617786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-003676
	I1222 23:55:56.717512  617786 ssh_runner.go:195] Run: cat /version.json
	I1222 23:55:56.717557  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:56.717583  617786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:55:56.717679  617786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-003676
	I1222 23:55:56.736420  617786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa Username:docker}
	I1222 23:55:56.736692  617786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/custom-flannel-003676/id_rsa Username:docker}
	I1222 23:55:56.893316  617786 ssh_runner.go:195] Run: systemctl --version
	I1222 23:55:56.899902  617786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:55:56.904458  617786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:55:56.904512  617786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:55:56.928428  617786 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
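The bridge and podman CNI configs are disabled by renaming them with a .mk_disabled suffix rather than deleting them, so the originals can be restored later. A shell-quoted equivalent of the find invocation above:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;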
	I1222 23:55:56.928459  617786 start.go:496] detecting cgroup driver to use...
	I1222 23:55:56.928489  617786 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:55:56.928699  617786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:55:56.942135  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:55:56.951657  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:55:56.960648  617786 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:55:56.960714  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:55:56.969269  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:55:56.977545  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:55:56.985768  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:55:56.994002  617786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:55:57.002057  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:55:57.010781  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:55:57.019338  617786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
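The sed runs above edit /etc/containerd/config.toml in place: pinning the pause sandbox image, forcing SystemdCgroup = false to match the detected cgroupfs driver, upgrading runtimes to io.containerd.runc.v2, pointing conf_dir at /etc/cni/net.d, and re-enabling unprivileged ports. The net effect can be checked before the restart with:

    grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml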
	I1222 23:55:57.028033  617786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:55:57.035330  617786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:55:57.042649  617786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:55:57.121869  617786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 23:55:57.196112  617786 start.go:496] detecting cgroup driver to use...
	I1222 23:55:57.196167  617786 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:55:57.196224  617786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:55:57.209418  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:55:57.220974  617786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:55:57.241009  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:55:57.253067  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:55:57.264991  617786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:55:57.279347  617786 ssh_runner.go:195] Run: which cri-dockerd
	I1222 23:55:57.282921  617786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:55:57.291619  617786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:55:57.303784  617786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:55:57.387382  617786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:55:57.470086  617786 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:55:57.470203  617786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:55:57.483581  617786 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:55:57.496273  617786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:55:57.593313  617786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:55:58.292981  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:55:58.307218  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:55:58.320346  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:55:58.333061  617786 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:55:58.414894  617786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:55:58.497382  617786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:55:58.579331  617786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:55:58.603946  617786 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:55:58.615842  617786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:55:58.697639  617786 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:55:58.767556  617786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:55:58.781026  617786 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:55:58.781090  617786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 23:55:58.784816  617786 start.go:564] Will wait 60s for crictl version
	I1222 23:55:58.784862  617786 ssh_runner.go:195] Run: which crictl
	I1222 23:55:58.788184  617786 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:55:58.811914  617786 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 23:55:58.811973  617786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:55:58.838324  617786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:55:58.864736  617786 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.3 ...
	I1222 23:55:58.864820  617786 cli_runner.go:164] Run: docker network inspect custom-flannel-003676 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:55:58.881959  617786 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1222 23:55:58.886132  617786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
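The /etc/hosts rewrite strips any stale host.minikube.internal entry before appending the current gateway IP, staging the result in a temp file so the final sudo cp replaces the file in one step. Spelled out with the values from this run:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.85.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts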
	I1222 23:55:58.896141  617786 kubeadm.go:884] updating cluster {Name:custom-flannel-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:55:58.896283  617786 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 23:55:58.896345  617786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:55:58.917158  617786 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:55:58.917179  617786 docker.go:624] Images already preloaded, skipping extraction
	I1222 23:55:58.917243  617786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:55:58.937403  617786 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:55:58.937425  617786 cache_images.go:86] Images are preloaded, skipping loading
	I1222 23:55:58.937434  617786 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1222 23:55:58.937522  617786 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-003676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
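As with the docker unit, the kubelet flags land in a drop-in (the 10-kubeadm.conf scp'd a few lines below) whose empty ExecStart= clears the base unit's command before setting the new one. The merged unit systemd will actually execute can be inspected with:

    systemctl cat kubelet   # prints the base unit plus each drop-in, in load order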
	I1222 23:55:58.937571  617786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 23:55:58.986173  617786 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1222 23:55:58.986220  617786 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 23:55:58.986252  617786 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-003676 NodeName:custom-flannel-003676 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:55:58.986425  617786 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-003676"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
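The generated file stacks four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. Recent kubeadm releases can lint such a file before it is used; assuming the subcommand is present in v1.34, that would look like:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml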
	
	I1222 23:55:58.986507  617786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1222 23:55:58.995856  617786 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 23:55:58.995931  617786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:55:59.005524  617786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1222 23:55:59.021552  617786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1222 23:55:59.036632  617786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1222 23:55:59.052939  617786 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:55:59.056771  617786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:55:59.066451  617786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:55:59.149251  617786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:55:59.172870  617786 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676 for IP: 192.168.85.2
	I1222 23:55:59.172894  617786 certs.go:195] generating shared ca certs ...
	I1222 23:55:59.172915  617786 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:59.173092  617786 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:55:59.173146  617786 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:55:59.173159  617786 certs.go:257] generating profile certs ...
	I1222 23:55:59.173225  617786 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.key
	I1222 23:55:59.173243  617786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt with IP's: []
	I1222 23:55:59.329059  617786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt ...
	I1222 23:55:59.329090  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: {Name:mk8a46e391dfffb9bf57cb2b54caef9738c0b355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:59.329302  617786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.key ...
	I1222 23:55:59.329322  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.key: {Name:mk7b662bb40284f44ac950d8a0a716fccc240efa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:59.329454  617786 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.key.1ccb3158
	I1222 23:55:59.329478  617786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.crt.1ccb3158 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1222 23:55:59.460546  617786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.crt.1ccb3158 ...
	I1222 23:55:59.460577  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.crt.1ccb3158: {Name:mk8b46914d5d74ac83896a7a7baac1a4f1f71f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:59.460773  617786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.key.1ccb3158 ...
	I1222 23:55:59.460797  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.key.1ccb3158: {Name:mk92667d120eb9b8590c48721dfe919f52e7f1ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:59.460904  617786 certs.go:382] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.crt.1ccb3158 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.crt
	I1222 23:55:59.461014  617786 certs.go:386] copying /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.key.1ccb3158 -> /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.key
	I1222 23:55:59.461097  617786 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.key
	I1222 23:55:59.461115  617786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.crt with IP's: []
	I1222 23:55:59.641997  617786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.crt ...
	I1222 23:55:59.642028  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.crt: {Name:mkb8c9feafe974b3a650e6c81c52822ef92f35c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:59.642228  617786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.key ...
	I1222 23:55:59.642252  617786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.key: {Name:mk761e57186ee8a3d2a409457872ff627b961211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:55:59.642496  617786 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:55:59.642545  617786 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:55:59.642561  617786 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:55:59.642610  617786 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:55:59.642652  617786 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:55:59.642686  617786 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:55:59.642748  617786 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:55:59.643451  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:55:59.661505  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:55:59.678817  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:55:59.695910  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:55:59.712659  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1222 23:55:59.729893  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 23:55:59.748434  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:55:59.768398  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 23:55:59.790176  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:55:59.812370  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:55:59.829121  617786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:55:59.846199  617786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:55:59.858400  617786 ssh_runner.go:195] Run: openssl version
	I1222 23:55:59.864324  617786 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:55:59.871207  617786 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:55:59.878734  617786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:55:59.882321  617786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:55:59.882381  617786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:55:59.916726  617786 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 23:55:59.924468  617786 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/758032.pem /etc/ssl/certs/3ec20f2e.0
	I1222 23:55:59.931857  617786 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:55:59.938988  617786 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:55:59.946063  617786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:55:59.949671  617786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:55:59.949718  617786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:55:59.983201  617786 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:55:59.990815  617786 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1222 23:55:59.998580  617786 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:56:00.006558  617786 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:56:00.014039  617786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:56:00.017881  617786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:56:00.017934  617786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:56:00.055029  617786 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1222 23:56:00.062705  617786 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/75803.pem /etc/ssl/certs/51391683.0
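Each pairing of openssl x509 -hash with ln -fs above builds an OpenSSL subject-hash lookup entry: TLS clients resolve CAs in /etc/ssl/certs by the hash of the certificate's subject name, so the symlink must be named <hash>.0. The two steps, made explicit:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"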
	I1222 23:56:00.069776  617786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:56:00.073350  617786 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1222 23:56:00.073427  617786 kubeadm.go:401] StartCluster: {Name:custom-flannel-003676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-003676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:56:00.073549  617786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:56:00.093065  617786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:56:00.100938  617786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1222 23:56:00.108297  617786 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1222 23:56:00.108344  617786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1222 23:56:00.116621  617786 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1222 23:56:00.116640  617786 kubeadm.go:158] found existing configuration files:
	
	I1222 23:56:00.116685  617786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1222 23:56:00.124351  617786 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1222 23:56:00.124397  617786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1222 23:56:00.131350  617786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1222 23:56:00.138565  617786 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1222 23:56:00.138626  617786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1222 23:56:00.145500  617786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1222 23:56:00.152665  617786 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1222 23:56:00.152711  617786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1222 23:56:00.160092  617786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1222 23:56:00.167406  617786 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1222 23:56:00.167457  617786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1222 23:56:00.174477  617786 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1222 23:56:00.211853  617786 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1222 23:56:00.211944  617786 kubeadm.go:319] [preflight] Running pre-flight checks
	I1222 23:56:00.233952  617786 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1222 23:56:00.234084  617786 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1222 23:56:00.234153  617786 kubeadm.go:319] OS: Linux
	I1222 23:56:00.234219  617786 kubeadm.go:319] CGROUPS_CPU: enabled
	I1222 23:56:00.234287  617786 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1222 23:56:00.234380  617786 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1222 23:56:00.234461  617786 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1222 23:56:00.234541  617786 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1222 23:56:00.234651  617786 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1222 23:56:00.234719  617786 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1222 23:56:00.234762  617786 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1222 23:56:00.234831  617786 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1222 23:56:00.291766  617786 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1222 23:56:00.291911  617786 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1222 23:56:00.292026  617786 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
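As the preflight hint says, the image pull can be taken out of the init critical path by pre-pulling against the same config file written earlier:

    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml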
	I1222 23:56:00.303236  617786 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1222 23:56:00.305253  617786 out.go:252]   - Generating certificates and keys ...
	I1222 23:56:00.305327  617786 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1222 23:56:00.305386  617786 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1222 23:56:00.399081  617786 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1222 23:56:00.533106  617786 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1222 23:56:00.681142  617786 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1222 23:56:00.725671  617786 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1222 23:56:01.490036  617786 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1222 23:56:01.490177  617786 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-003676 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1222 23:56:01.541990  617786 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1222 23:56:01.542100  617786 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-003676 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	
	
	==> Docker <==
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.067952621Z" level=info msg="Restoring containers: start."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.087206382Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.105276522Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.608862562Z" level=info msg="Loading containers: done."
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622047370Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622108955Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.622142828Z" level=info msg="Initializing buildkit"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.654446856Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660274243Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348571Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660422079Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:45:57 no-preload-063943 dockerd[1158]: time="2025-12-22T23:45:57.660348899Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:45:57 no-preload-063943 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:45:58 no-preload-063943 cri-dockerd[1447]: time="2025-12-22T23:45:58Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:45:58 no-preload-063943 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1222 23:56:04.776296   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:56:04.776829   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:56:04.778420   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:56:04.778868   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1222 23:56:04.780369   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e0 b3 e5 fd 05 08 06
	[Dec22 23:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 5e 7d 42 4c be 08 06
	[ +37.051500] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 9d 29 a8 c7 7e 08 06
	[  +0.046977] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a 20 ef 34 9e ff 08 06
	[  +2.780094] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 36 71 18 35 80 08 06
	[  +0.005286] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e 85 6b 14 50 db 08 06
	[Dec22 23:49] IPv4: martian source 10.244.0.1 from 10.244.0.7, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 3d 46 1b 4b 15 08 06
	[  +8.285809] IPv4: martian source 10.244.0.1 from 10.244.0.10, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 de e5 d5 d2 d6 08 06
	[Dec22 23:50] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 9c 73 09 d8 3c 08 06
	[Dec22 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fe dd 45 92 98 69 08 06
	[  +0.005109] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	[Dec22 23:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e 26 d0 5e 2a 12 08 06
	[  +0.000315] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ee 3b 16 0e 30 fb 08 06
	
	
	==> kernel <==
	 23:56:04 up  3:38,  0 user,  load average: 0.81, 1.42, 1.56
	Linux no-preload-063943 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 22 23:56:01 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:01 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 468.
	Dec 22 23:56:01 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:01 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:02 no-preload-063943 kubelet[11789]: E1222 23:56:02.048267   11789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:02 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:02 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:02 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 469.
	Dec 22 23:56:02 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:02 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:02 no-preload-063943 kubelet[11799]: E1222 23:56:02.802460   11799 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:02 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:02 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:03 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 470.
	Dec 22 23:56:03 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:03 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:03 no-preload-063943 kubelet[11811]: E1222 23:56:03.544133   11811 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:03 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:03 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 22 23:56:04 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 471.
	Dec 22 23:56:04 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:04 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 22 23:56:04 no-preload-063943 kubelet[11846]: E1222 23:56:04.294683   11846 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 22 23:56:04 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 22 23:56:04 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
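
The kubelet section above is the root cause: kubelet v1.35.0-rc.1 refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd reschedules the unit over and over (restart counter 468 through 471), and with no kubelet the apiserver never comes up on :8443, which is exactly the connection-refused error in the describe-nodes section. A minimal way to confirm which cgroup hierarchy the node mounts, as a diagnostic sketch that the recorded run itself does not execute (stat prints cgroup2fs on a v2 host and tmpfs on v1):

	minikube ssh -p no-preload-063943 "stat -fc %T /sys/fs/cgroup/"
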
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 6 (317.467844ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1222 23:56:05.164612  621899 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (110.79s)
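
Exit status 6 lines up with the stderr above: the "no-preload-063943" entry is missing from the kubeconfig, so the endpoint lookup fails before any apiserver probe. The command's own stdout names the repair; as a sketch of that suggestion (hypothetical here, the harness never runs it):

	minikube update-context -p no-preload-063943
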

x
+
TestStartStop/group/no-preload/serial/SecondStart (370.7s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
E1222 23:56:30.659698   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 80 (6m9.279793472s)
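
With --preload=false the restart cannot use the preloaded image tarball, so the trace below falls back to the per-image cache: each registry.k8s.io image is checked against a tar under the profile's cache directory ("Successfully saved all images to host disk"). A sketch for inspecting that cache, assuming the layout shown in the log and MINIKUBE_HOME as set in the stdout below:

	ls "$MINIKUBE_HOME/cache/images/amd64/registry.k8s.io/"
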

-- stdout --
	* [no-preload-063943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-063943" primary control-plane node in "no-preload-063943" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
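
The restart gets through provisioning, prints "Verifying Kubernetes components...", and then exits 80 after 6m9s with an empty "Enabled addons:" line; with StartHostTimeout set to 6m0s in the profile config, that is consistent with the component wait timing out on the same kubelet cgroup-v1 failure recorded above, not with a provisioning error. A way to watch the kubelet unit while such a start is looping, as a diagnostic sketch (assuming the node container is up):

	minikube ssh -p no-preload-063943 "sudo journalctl -u kubelet --no-pager -n 20"
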
** stderr ** 
	I1222 23:56:06.787812  622784 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:56:06.787957  622784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:56:06.787968  622784 out.go:374] Setting ErrFile to fd 2...
	I1222 23:56:06.787975  622784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:56:06.788198  622784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:56:06.788690  622784 out.go:368] Setting JSON to false
	I1222 23:56:06.789869  622784 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13107,"bootTime":1766434660,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:56:06.789927  622784 start.go:143] virtualization: kvm guest
	I1222 23:56:06.791916  622784 out.go:179] * [no-preload-063943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:56:06.793066  622784 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:56:06.793070  622784 notify.go:221] Checking for updates...
	I1222 23:56:06.795123  622784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:56:06.796547  622784 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:56:06.797830  622784 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:56:06.798963  622784 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:56:06.799929  622784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:56:06.801301  622784 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:56:06.801991  622784 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:56:06.829291  622784 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:56:06.829446  622784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:56:06.883810  622784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:56:06.874445764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:56:06.883920  622784 docker.go:319] overlay module found
	I1222 23:56:06.889706  622784 out.go:179] * Using the docker driver based on existing profile
	I1222 23:56:06.890811  622784 start.go:309] selected driver: docker
	I1222 23:56:06.890825  622784 start.go:928] validating driver "docker" against &{Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:56:06.890942  622784 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:56:06.891846  622784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:56:06.949722  622784 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-22 23:56:06.939639835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:56:06.950105  622784 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1222 23:56:06.950134  622784 cni.go:84] Creating CNI manager for ""
	I1222 23:56:06.950197  622784 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:56:06.950233  622784 start.go:353] cluster config:
	{Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:56:06.952215  622784 out.go:179] * Starting "no-preload-063943" primary control-plane node in "no-preload-063943" cluster
	I1222 23:56:06.953238  622784 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 23:56:06.954402  622784 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1222 23:56:06.955517  622784 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:56:06.955632  622784 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 23:56:06.955700  622784 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/config.json ...
	I1222 23:56:06.955876  622784 cache.go:107] acquiring lock: {Name:mka2a7cd00c9ee09fcd67b9fe2b1b7736241aafe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955888  622784 cache.go:107] acquiring lock: {Name:mk804c5f94e18a50ea710125b603ced35b076f37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955908  622784 cache.go:107] acquiring lock: {Name:mk0f5262807fb5404c75ce06ce5720befe312fb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955959  622784 cache.go:107] acquiring lock: {Name:mk6f1235a31bde2512aad5a6083026ce14993945 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956006  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1222 23:56:06.955994  622784 cache.go:107] acquiring lock: {Name:mk62dd6bd5d525d245831d36e2e60bb4a4c91eaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.955983  622784 cache.go:107] acquiring lock: {Name:mk7ec73f502e042fe14942dd4168f5178dfa9f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956016  622784 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 155.603µs
	I1222 23:56:06.956034  622784 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956023  622784 cache.go:107] acquiring lock: {Name:mk06ea69d634e565762c96598011d1945c901ed0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956055  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1222 23:56:06.956079  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1222 23:56:06.956082  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1222 23:56:06.956091  622784 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 234.014µs
	I1222 23:56:06.956094  622784 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 106.055µs
	I1222 23:56:06.956010  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1222 23:56:06.956101  622784 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1222 23:56:06.956078  622784 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 120.906µs
	I1222 23:56:06.956110  622784 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1222 23:56:06.956103  622784 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1222 23:56:06.955994  622784 cache.go:107] acquiring lock: {Name:mk02371428913253d2a19c8c9a792727a5cd8caa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.956125  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1222 23:56:06.956131  622784 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 204.094µs
	I1222 23:56:06.956141  622784 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956109  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1222 23:56:06.956149  622784 cache.go:115] /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1222 23:56:06.956155  622784 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 133.189µs
	I1222 23:56:06.956156  622784 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 165.223µs
	I1222 23:56:06.956166  622784 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1222 23:56:06.956166  622784 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956157  622784 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 213.023µs
	I1222 23:56:06.956199  622784 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1222 23:56:06.956215  622784 cache.go:87] Successfully saved all images to host disk.
	I1222 23:56:06.977510  622784 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1222 23:56:06.977529  622784 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1222 23:56:06.977556  622784 cache.go:243] Successfully downloaded all kic artifacts
	I1222 23:56:06.977588  622784 start.go:360] acquireMachinesLock for no-preload-063943: {Name:mke00101a1e3840a843a95b7b058ed2d434978f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1222 23:56:06.977668  622784 start.go:364] duration metric: took 39.489µs to acquireMachinesLock for "no-preload-063943"
	I1222 23:56:06.977686  622784 start.go:96] Skipping create...Using existing machine configuration
	I1222 23:56:06.977693  622784 fix.go:54] fixHost starting: 
	I1222 23:56:06.977891  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:06.996235  622784 fix.go:112] recreateIfNeeded on no-preload-063943: state=Stopped err=<nil>
	W1222 23:56:06.996269  622784 fix.go:138] unexpected machine state, will restart: <nil>
	I1222 23:56:06.998227  622784 out.go:252] * Restarting existing docker container for "no-preload-063943" ...
	I1222 23:56:06.998315  622784 cli_runner.go:164] Run: docker start no-preload-063943
	I1222 23:56:07.267708  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:07.293171  622784 kic.go:430] container "no-preload-063943" state is running.
	I1222 23:56:07.293722  622784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:56:07.316033  622784 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/config.json ...
	I1222 23:56:07.316314  622784 machine.go:94] provisionDockerMachine start ...
	I1222 23:56:07.316411  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:07.341250  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:07.341631  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:07.341656  622784 main.go:144] libmachine: About to run SSH command:
	hostname
	I1222 23:56:07.342836  622784 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36610->127.0.0.1:33138: read: connection reset by peer
	I1222 23:56:10.487762  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-063943
	
	I1222 23:56:10.487795  622784 ubuntu.go:182] provisioning hostname "no-preload-063943"
	I1222 23:56:10.487850  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:10.505616  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:10.505838  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:10.505851  622784 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-063943 && echo "no-preload-063943" | sudo tee /etc/hostname
	I1222 23:56:10.658160  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-063943
	
	I1222 23:56:10.658231  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:10.675956  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:10.676168  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:10.676185  622784 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-063943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-063943/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-063943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1222 23:56:10.821341  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:56:10.821371  622784 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1222 23:56:10.821395  622784 ubuntu.go:190] setting up certificates
	I1222 23:56:10.821408  622784 provision.go:84] configureAuth start
	I1222 23:56:10.821472  622784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:56:10.838531  622784 provision.go:143] copyHostCerts
	I1222 23:56:10.838584  622784 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1222 23:56:10.838614  622784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1222 23:56:10.838691  622784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1222 23:56:10.838798  622784 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1222 23:56:10.838806  622784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1222 23:56:10.838834  622784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1222 23:56:10.838905  622784 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1222 23:56:10.838913  622784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1222 23:56:10.838938  622784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1222 23:56:10.839052  622784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.no-preload-063943 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-063943]
	I1222 23:56:10.925520  622784 provision.go:177] copyRemoteCerts
	I1222 23:56:10.925580  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1222 23:56:10.925637  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:10.946266  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.053523  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1222 23:56:11.071312  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1222 23:56:11.089514  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1222 23:56:11.106427  622784 provision.go:87] duration metric: took 285.002519ms to configureAuth
	I1222 23:56:11.106461  622784 ubuntu.go:206] setting minikube options for container-runtime
	I1222 23:56:11.106672  622784 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:56:11.106743  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.124909  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:11.125186  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:11.125199  622784 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1222 23:56:11.278611  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1222 23:56:11.278644  622784 ubuntu.go:71] root file system type: overlay
	I1222 23:56:11.278779  622784 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1222 23:56:11.278840  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.300929  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:11.301153  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:11.301217  622784 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1222 23:56:11.463142  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1222 23:56:11.463226  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.481470  622784 main.go:144] libmachine: Using SSH client type: native
	I1222 23:56:11.481715  622784 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1222 23:56:11.481734  622784 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1222 23:56:11.639310  622784 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1222 23:56:11.639342  622784 machine.go:97] duration metric: took 4.323005964s to provisionDockerMachine
	I1222 23:56:11.639356  622784 start.go:293] postStartSetup for "no-preload-063943" (driver="docker")
	I1222 23:56:11.639373  622784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1222 23:56:11.639462  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1222 23:56:11.639518  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.657756  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.760924  622784 ssh_runner.go:195] Run: cat /etc/os-release
	I1222 23:56:11.765207  622784 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1222 23:56:11.765242  622784 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1222 23:56:11.765256  622784 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1222 23:56:11.765321  622784 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1222 23:56:11.765437  622784 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1222 23:56:11.765558  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1222 23:56:11.774504  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:56:11.796185  622784 start.go:296] duration metric: took 156.80897ms for postStartSetup
	I1222 23:56:11.796390  622784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:56:11.796444  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.814486  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.914168  622784 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1222 23:56:11.919676  622784 fix.go:56] duration metric: took 4.941972273s for fixHost
	I1222 23:56:11.919706  622784 start.go:83] releasing machines lock for "no-preload-063943", held for 4.942026606s
	I1222 23:56:11.919783  622784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-063943
	I1222 23:56:11.937547  622784 ssh_runner.go:195] Run: cat /version.json
	I1222 23:56:11.937625  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.937644  622784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1222 23:56:11.937709  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:11.958084  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:11.958371  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:12.112211  622784 ssh_runner.go:195] Run: systemctl --version
	I1222 23:56:12.119070  622784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1222 23:56:12.123978  622784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1222 23:56:12.124040  622784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1222 23:56:12.132370  622784 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1222 23:56:12.132398  622784 start.go:496] detecting cgroup driver to use...
	I1222 23:56:12.132446  622784 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:56:12.132557  622784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:56:12.147969  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1222 23:56:12.156989  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1222 23:56:12.166284  622784 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1222 23:56:12.166336  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1222 23:56:12.175243  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:56:12.184752  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1222 23:56:12.193320  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1222 23:56:12.203027  622784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1222 23:56:12.211365  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1222 23:56:12.220477  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1222 23:56:12.229435  622784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1222 23:56:12.239534  622784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1222 23:56:12.248781  622784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1222 23:56:12.258007  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:12.349279  622784 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1222 23:56:12.432760  622784 start.go:496] detecting cgroup driver to use...
	I1222 23:56:12.432818  622784 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1222 23:56:12.432866  622784 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1222 23:56:12.448246  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:56:12.463118  622784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1222 23:56:12.487623  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1222 23:56:12.506873  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1222 23:56:12.529985  622784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1222 23:56:12.552327  622784 ssh_runner.go:195] Run: which cri-dockerd
	I1222 23:56:12.558931  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1222 23:56:12.568605  622784 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1222 23:56:12.581936  622784 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1222 23:56:12.677251  622784 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1222 23:56:12.764171  622784 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1222 23:56:12.764297  622784 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1222 23:56:12.778866  622784 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1222 23:56:12.791097  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:12.873271  622784 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1222 23:56:13.647694  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1222 23:56:13.660319  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1222 23:56:13.672142  622784 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1222 23:56:13.686404  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:56:13.699284  622784 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1222 23:56:13.787110  622784 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1222 23:56:13.865157  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:13.947288  622784 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1222 23:56:13.973228  622784 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1222 23:56:13.986491  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:14.079336  622784 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1222 23:56:14.149938  622784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1222 23:56:14.163244  622784 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1222 23:56:14.163323  622784 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1222 23:56:14.167360  622784 start.go:564] Will wait 60s for crictl version
	I1222 23:56:14.167406  622784 ssh_runner.go:195] Run: which crictl
	I1222 23:56:14.170969  622784 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1222 23:56:14.196483  622784 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1222 23:56:14.196544  622784 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:56:14.225521  622784 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1222 23:56:14.252963  622784 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1222 23:56:14.253065  622784 cli_runner.go:164] Run: docker network inspect no-preload-063943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1222 23:56:14.270342  622784 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1222 23:56:14.274551  622784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:56:14.285856  622784 kubeadm.go:884] updating cluster {Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1222 23:56:14.285966  622784 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 23:56:14.285994  622784 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1222 23:56:14.306257  622784 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1222 23:56:14.306282  622784 cache_images.go:86] Images are preloaded, skipping loading
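Because this is the no-preload profile, the image check above only passes if every control-plane image is already present in the node's Docker daemon; nothing is loaded from a preload tarball. The same inventory can be taken by hand (assuming the profile name from this run and the minikube binary under test):

    # list the images already present inside the node
    out/minikube-linux-amd64 -p no-preload-063943 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'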
	I1222 23:56:14.306289  622784 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 docker true true} ...
	I1222 23:56:14.306405  622784 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-063943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
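The kubelet stanza above is written as a systemd drop-in rather than a full unit: the empty ExecStart= line clears the packaged command before the override sets the real one, and the 322-byte scp a few lines below lands it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Standard systemd tooling (not minikube-specific) shows the merged result:

    # show kubelet.service plus all drop-ins, in merge order
    systemctl cat kubelet
    # confirm which file supplied the effective ExecStart
    systemctl show -p ExecStart kubelet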
	I1222 23:56:14.306471  622784 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1222 23:56:14.356990  622784 cni.go:84] Creating CNI manager for ""
	I1222 23:56:14.357021  622784 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 23:56:14.357039  622784 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1222 23:56:14.357066  622784 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-063943 NodeName:no-preload-063943 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1222 23:56:14.357229  622784 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-063943"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
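The four kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new just below. As a sanity check one could validate such a file statically before handing it to kubeadm; a hedged sketch, since the validate subcommand depends on the kubeadm release in use:

    # statically validate the generated config against the kubeadm API schemas
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new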
	I1222 23:56:14.357308  622784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1222 23:56:14.365667  622784 binaries.go:51] Found k8s binaries, skipping transfer
	I1222 23:56:14.365732  622784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1222 23:56:14.373749  622784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1222 23:56:14.387395  622784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1222 23:56:14.400068  622784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1222 23:56:14.412487  622784 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1222 23:56:14.416170  622784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1222 23:56:14.426114  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:14.509743  622784 ssh_runner.go:195] Run: sudo systemctl start kubelet
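The daemon-reload plus start kubelet pair is where the node comes back to life; anything that goes wrong from here surfaces in the kubelet journal rather than in this log. A quick hedged check from inside the node:

    # confirm kubelet is running and tail its recent log
    systemctl is-active kubelet
    sudo journalctl -u kubelet --since '5 minutes ago' --no-pager | tail -n 20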
	I1222 23:56:14.540113  622784 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943 for IP: 192.168.103.2
	I1222 23:56:14.540133  622784 certs.go:195] generating shared ca certs ...
	I1222 23:56:14.540148  622784 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:14.540299  622784 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1222 23:56:14.540358  622784 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1222 23:56:14.540369  622784 certs.go:257] generating profile certs ...
	I1222 23:56:14.540483  622784 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/client.key
	I1222 23:56:14.540545  622784 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key.9af7d729
	I1222 23:56:14.540617  622784 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key
	I1222 23:56:14.540787  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1222 23:56:14.540833  622784 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1222 23:56:14.540848  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1222 23:56:14.540883  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1222 23:56:14.540913  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1222 23:56:14.540961  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1222 23:56:14.541019  622784 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1222 23:56:14.541682  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1222 23:56:14.561930  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1222 23:56:14.582882  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1222 23:56:14.602213  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1222 23:56:14.620069  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1222 23:56:14.637034  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1222 23:56:14.655477  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1222 23:56:14.673395  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/no-preload-063943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1222 23:56:14.692559  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1222 23:56:14.711580  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1222 23:56:14.730313  622784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1222 23:56:14.748699  622784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1222 23:56:14.763148  622784 ssh_runner.go:195] Run: openssl version
	I1222 23:56:14.770288  622784 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.779278  622784 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1222 23:56:14.788538  622784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.793505  622784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.793569  622784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1222 23:56:14.829348  622784 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1222 23:56:14.837844  622784 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.845566  622784 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1222 23:56:14.853331  622784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.857200  622784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.857245  622784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1222 23:56:14.892797  622784 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1222 23:56:14.900664  622784 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.908787  622784 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1222 23:56:14.916129  622784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.919627  622784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.919685  622784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1222 23:56:14.954538  622784 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
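Each ln -fs plus openssl x509 -hash pair above implements OpenSSL's CA-directory convention: certificates in /etc/ssl/certs are looked up by subject-name hash, so each symlink must be named <hash>.0 (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). The hash is reproducible by hand:

    # derive the subject hash OpenSSL uses to find a CA cert in /etc/ssl/certs
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${HASH}.0"   # should resolve back to minikubeCA.pem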
	I1222 23:56:14.962583  622784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1222 23:56:14.966525  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1222 23:56:15.002367  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1222 23:56:15.038250  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1222 23:56:15.072963  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1222 23:56:15.109074  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1222 23:56:15.144870  622784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
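The -checkend 86400 probes above are exit-code checks: openssl exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, and minikube consumes only that status to decide whether certs need regenerating. For example:

    # exit 0 = valid for at least another 24h; non-zero = expiring or expired
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for >= 24h"
    else
      echo "cert expires within 24h; it would be regenerated"
    fi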
	I1222 23:56:15.181063  622784 kubeadm.go:401] StartCluster: {Name:no-preload-063943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-063943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:56:15.181226  622784 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1222 23:56:15.202967  622784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1222 23:56:15.212505  622784 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1222 23:56:15.212526  622784 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1222 23:56:15.212622  622784 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1222 23:56:15.221424  622784 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1222 23:56:15.221982  622784 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-063943" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:56:15.222165  622784 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-063943" cluster setting kubeconfig missing "no-preload-063943" context setting]
	I1222 23:56:15.222734  622784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:15.224130  622784 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1222 23:56:15.232074  622784 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1222 23:56:15.232102  622784 kubeadm.go:602] duration metric: took 19.558ms to restartPrimaryControlPlane
	I1222 23:56:15.232112  622784 kubeadm.go:403] duration metric: took 51.063653ms to StartCluster
	I1222 23:56:15.232129  622784 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:15.232190  622784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:56:15.233084  622784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 23:56:15.233330  622784 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1222 23:56:15.233429  622784 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1222 23:56:15.233510  622784 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:56:15.233543  622784 addons.go:70] Setting storage-provisioner=true in profile "no-preload-063943"
	I1222 23:56:15.233563  622784 addons.go:70] Setting default-storageclass=true in profile "no-preload-063943"
	I1222 23:56:15.233566  622784 addons.go:239] Setting addon storage-provisioner=true in "no-preload-063943"
	I1222 23:56:15.233566  622784 addons.go:70] Setting dashboard=true in profile "no-preload-063943"
	I1222 23:56:15.233576  622784 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-063943"
	I1222 23:56:15.233585  622784 addons.go:239] Setting addon dashboard=true in "no-preload-063943"
	W1222 23:56:15.233647  622784 addons.go:248] addon dashboard should already be in state true
	I1222 23:56:15.233687  622784 host.go:66] Checking if "no-preload-063943" exists ...
	I1222 23:56:15.233626  622784 host.go:66] Checking if "no-preload-063943" exists ...
	I1222 23:56:15.233903  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.234253  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.234253  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.235759  622784 out.go:179] * Verifying Kubernetes components...
	I1222 23:56:15.236782  622784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1222 23:56:15.256841  622784 addons.go:239] Setting addon default-storageclass=true in "no-preload-063943"
	I1222 23:56:15.256892  622784 host.go:66] Checking if "no-preload-063943" exists ...
	I1222 23:56:15.257253  622784 cli_runner.go:164] Run: docker container inspect no-preload-063943 --format={{.State.Status}}
	I1222 23:56:15.257984  622784 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1222 23:56:15.258712  622784 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1222 23:56:15.261263  622784 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1222 23:56:15.261324  622784 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:15.261338  622784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1222 23:56:15.261392  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:15.264710  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1222 23:56:15.264734  622784 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1222 23:56:15.264793  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:15.289134  622784 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:15.289157  622784 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1222 23:56:15.289218  622784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-063943
	I1222 23:56:15.291102  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:15.298284  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:15.308959  622784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/no-preload-063943/id_rsa Username:docker}
	I1222 23:56:15.382953  622784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1222 23:56:15.435098  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:15.446519  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1222 23:56:15.446552  622784 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1222 23:56:15.447890  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:15.461566  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1222 23:56:15.461602  622784 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1222 23:56:15.475470  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1222 23:56:15.475494  622784 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1222 23:56:15.537632  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1222 23:56:15.537677  622784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1222 23:56:15.553483  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1222 23:56:15.553507  622784 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1222 23:56:15.566207  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1222 23:56:15.566236  622784 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1222 23:56:15.579179  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1222 23:56:15.579199  622784 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1222 23:56:15.591978  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1222 23:56:15.591998  622784 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1222 23:56:15.604482  622784 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 23:56:15.604506  622784 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1222 23:56:15.617227  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1222 23:56:15.996825  622784 node_ready.go:35] waiting up to 6m0s for node "no-preload-063943" to be "Ready" ...
	W1222 23:56:15.996928  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:15.996977  622784 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:15.997065  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:15.997221  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
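Every failure in this stretch is the same root cause: kubectl's client-side validation fetches the apiserver's OpenAPI document, and kube-apiserver on localhost:8443 is not accepting connections yet while the control plane restarts, so the addon applies are retried on a backoff. A hedged probe for when the apiserver is actually back (run inside the node; /healthz is anonymously readable on a default cluster):

    # prints "ok" once kube-apiserver is serving again
    curl -sk https://localhost:8443/healthz; echo
    # or ask via kubectl with the same kubeconfig the retries use
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get --raw /healthz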
	I1222 23:56:16.138509  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:16.186213  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:16.195112  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:16.242963  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.321173  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:16.375911  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.552585  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1222 23:56:16.604411  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:16.607350  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:16.613504  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:16.667791  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:16.675579  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:17.119785  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1222 23:56:17.136415  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:17.192096  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:17.204255  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:17.272474  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:17.345915  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:17.799737  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:17.863818  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:17.997440  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:18.263178  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:18.321207  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:18.327549  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:18.389710  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:19.690562  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:19.743520  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:19.794721  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:19.847994  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:19.997559  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:20.181760  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:20.253051  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:20.763117  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:20.823326  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:21.723446  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:21.777119  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:21.997768  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
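
	[editor's note] The interleaved node_ready.go warnings come from a separate poll loop that fetches the Node object and inspects its "Ready" condition; it fails with the same "connection refused" against 192.168.103.2:8443 and retries. A minimal client-go sketch of that kind of check (an illustration under assumptions, not minikube's implementation; the kubeconfig path and node name are copied from the log, and running it presumes access to that kubeconfig):

	// nodeready.go: poll-style readiness check, the shape of what node_ready.go logs.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is the logged error:
			// "dial tcp 192.168.103.2:8443: connect: connection refused"
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := nodeReady(context.Background(), cs, "no-preload-063943")
		fmt.Println("ready:", ready, "err:", err)
	}
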
	I1222 23:56:22.221157  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:22.278106  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:23.764343  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:23.827213  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:23.997943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:24.175261  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:24.241378  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:24.725375  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:24.784907  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:26.497519  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:27.225966  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:27.282345  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:27.476666  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:27.545301  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:28.498053  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:28.518191  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:28.573974  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:30.498143  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:30.980635  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:31.040465  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:32.498413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:34.868644  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:34.927187  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:34.927227  622784 retry.go:84] will retry after 7.7s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
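
	[editor's note] From here the applier's retry.go takes over with growing, jittered delays (7.7s here, later 21.6s, 10.8s, 31.3s, 21.1s, 21.8s, 42.8s). A generic sketch of that pattern (assumptions: this is the common exponential-backoff-with-jitter idiom the irregular delays suggest, not minikube's actual retry.go; names and parameters are illustrative):

	// backoff.go: retry a failing operation with exponential backoff plus jitter.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Double each round, then jitter to between 0.5x and 1.5x,
			// which yields uneven delays like the 7.7s/21.6s/42.8s above.
			d := base << uint(i)
			d = d/2 + time.Duration(rand.Int63n(int64(d)))
			fmt.Printf("will retry after %s: %v\n", d.Round(100*time.Millisecond), err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(5, 5*time.Second, func() error {
			return fmt.Errorf("apply failed: connection refused")
		})
		fmt.Println("gave up:", err)
	}
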
	W1222 23:56:34.997930  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:36.186448  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:36.239960  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:36.998264  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:39.156321  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:39.212416  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:39.498118  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:41.997460  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:42.633551  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:56:42.686061  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:42.686101  622784 retry.go:84] will retry after 21.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:43.997753  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:46.497541  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:47.074585  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:47.134741  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:47.134782  622784 retry.go:84] will retry after 10.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:48.498232  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:50.559326  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:56:50.622142  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:50.622190  622784 retry.go:84] will retry after 31.3s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:50.998060  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:52.998159  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:55.497429  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:56:57.498336  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:56:57.978795  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:56:58.057851  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:56:58.057903  622784 retry.go:84] will retry after 21.1s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:56:59.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:02.498209  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:57:04.279434  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:57:04.354352  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:57:04.354394  622784 retry.go:84] will retry after 21.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:57:04.998224  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:07.497551  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:09.498073  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:11.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:13.998131  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:16.498287  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:18.997622  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:57:19.146806  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:57:19.213552  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:57:19.213614  622784 retry.go:84] will retry after 42.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:57:20.998390  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:57:21.915777  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:57:21.979626  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:57:23.497676  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:25.498139  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:57:26.115199  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:57:26.175192  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1222 23:57:26.175248  622784 retry.go:84] will retry after 37s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:57:27.997997  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:29.998085  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:32.498245  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:34.998264  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:37.497882  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:39.498371  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:41.998202  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:43.998446  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:46.497408  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:48.498147  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:50.998227  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:53.498346  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:55.998216  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:57:58.497498  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:00.498039  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:58:02.043528  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1222 23:58:02.099496  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:58:02.099666  622784 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 23:58:02.122710  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1222 23:58:02.178277  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:58:02.178386  622784 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1222 23:58:02.997550  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1222 23:58:03.224920  622784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1222 23:58:03.280913  622784 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1222 23:58:03.281053  622784 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1222 23:58:03.282855  622784 out.go:179] * Enabled addons: 
	I1222 23:58:03.284082  622784 addons.go:530] duration metric: took 1m48.050662246s for enable addons: enabled=[]
	W1222 23:58:04.997931  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:07.497459  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:09.498317  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:11.997711  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:13.998159  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:15.998365  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:18.498172  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:20.997814  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:23.497722  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:25.497978  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:27.498441  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:29.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:31.998267  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:33.998430  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:36.498262  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:38.498541  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:40.997513  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:42.998148  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:45.497681  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:47.997705  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:50.000093  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:52.498257  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:54.502780  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:56.997817  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:58:58.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:01.498141  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:03.498218  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:05.998300  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:08.498162  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:10.998111  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:12.998574  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:15.498114  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:17.997423  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:19.998386  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:22.498166  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:24.997459  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:26.998051  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:29.497510  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:31.498368  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:33.998166  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:36.498247  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:38.998304  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:41.498146  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:43.997576  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:45.998174  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:48.498154  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:50.997781  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:53.497731  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:55.498167  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:57.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1222 23:59:59.998287  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:02.498185  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:04.997792  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:06.997877  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:08.998254  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:11.498182  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:13.998171  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:15.998341  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:18.498068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:20.997931  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:23.498144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:25.997511  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:27.997800  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:29.998008  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:31.998314  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:34.497435  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:36.498048  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:38.997749  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:40.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:43.498161  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:45.997960  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:47.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:50.498122  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:52.997659  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:54.998340  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:57.497653  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:59.497943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:01.997413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:03.998103  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:06.498109  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:08.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:11.498214  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:13.998099  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:16.497424  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:18.498157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:20.998023  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:23.498226  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:25.998233  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:28.498149  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:30.998068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:33.497906  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:35.997708  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:38.498282  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:40.998196  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:43.497482  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:45.497538  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:47.498181  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:49.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:52.497841  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:54.498062  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:56.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:58.498177  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:00.998033  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:03.497953  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:05.997506  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:07.998312  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:10.498076  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:12.997638  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:15.497547  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:15.997757  622784 node_ready.go:38] duration metric: took 6m0.000870759s for node "no-preload-063943" to be "Ready" ...
	I1223 00:02:15.999489  622784 out.go:203] 
	W1223 00:02:16.002745  622784 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1223 00:02:16.002767  622784 out.go:285] * 
	W1223 00:02:16.002971  622784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1223 00:02:16.004060  622784 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1": exit status 80
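For local triage, a minimal reproduction sketch (assuming the same tree layout with the minikube binary at out/minikube-linux-amd64; the profile name and flags are copied verbatim from the failing invocation above):

	# Re-run the failing post-stop start with verbose output.
	out/minikube-linux-amd64 start -p no-preload-063943 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
	# On failure, collect full logs to attach to a GitHub issue, as the exit message above suggests.
	out/minikube-linux-amd64 -p no-preload-063943 logs --file=logs.txt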
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-063943
helpers_test.go:244: (dbg) docker inspect no-preload-063943:

-- stdout --
	[
	    {
	        "Id": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	        "Created": "2025-12-22T23:45:49.557145486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:56:07.024549385Z",
	            "FinishedAt": "2025-12-22T23:56:05.577772514Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hosts",
	        "LogPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc-json.log",
	        "Name": "/no-preload-063943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-063943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-063943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	                "LowerDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-063943",
	                "Source": "/var/lib/docker/volumes/no-preload-063943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-063943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-063943",
	                "name.minikube.sigs.k8s.io": "no-preload-063943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e615544c2ed8dc279a0d7bd7031d234c4bd36d86ac886a8680dbb0ce786c6bb0",
	            "SandboxKey": "/var/run/docker/netns/e615544c2ed8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-063943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fe1a4d651e77a6056be2344adfa00e0a1474c8d315239814c9f2b4594dd53fd",
	                    "EndpointID": "77c3f5905f39c7f705fb61bcc99a23730dfec3ccc0be5afe97e05c39881c936c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "b6:1b:7b:4d:bd:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-063943",
	                        "786df4b77771"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
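The fields the post-mortem relies on can also be pulled individually with docker inspect Go templates; this is a sketch reusing the template patterns that appear later in this log (the "8443/tcp" key is an assumption here, matching the apiserver port binding shown above):

	# Container state: reports "running" even while the apiserver refuses connections.
	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' no-preload-063943
	# Host port mapped to the in-container apiserver port 8443 (33141 above).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-063943
	# Container IP on the profile network (192.168.103.2 above).
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-063943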
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 2 (295.809218ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
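Since --format={{.Host}} surfaces only the host container state, a plain status call gives the per-component breakdown (a sketch; the exit status stays non-zero while any component is stopped, which is why the harness notes it "may be ok"):

	# Expect Host: Running with kubelet/apiserver not yet healthy.
	out/minikube-linux-amd64 status -p no-preload-063943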
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-063943 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-003676 sudo iptables -t nat -L -n -v                                 │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status kubelet --all --full --no-pager         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat kubelet --no-pager                         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status docker --all --full --no-pager          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat docker --no-pager                          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/docker/daemon.json                              │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo docker system info                                       │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat cri-docker --no-pager                      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cri-dockerd --version                                    │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status containerd --all --full --no-pager      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat containerd --no-pager                      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /lib/systemd/system/containerd.service               │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/containerd/config.toml                          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo containerd config dump                                   │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status crio --all --full --no-pager            │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │                     │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat crio --no-pager                            │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo crio config                                              │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ delete  │ -p kubenet-003676                                                               │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/23 00:00:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1223 00:00:34.066824  687772 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:34.067051  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067058  687772 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:34.067063  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067257  687772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:34.067701  687772 out.go:368] Setting JSON to false
	I1223 00:00:34.068753  687772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13374,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:34.068805  687772 start.go:143] virtualization: kvm guest
	W1223 00:00:29.964565  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:31.965119  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:33.965297  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:34.070524  687772 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:34.072192  687772 notify.go:221] Checking for updates...
	I1223 00:00:34.072201  687772 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:34.073912  687772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:34.074996  687772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:34.076047  687772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:34.077175  687772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:34.078295  687772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:34.079882  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:34.080446  687772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:34.106101  687772 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:34.106213  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.161275  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.151129133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.161373  687772 docker.go:319] overlay module found
	I1223 00:00:34.163775  687772 out.go:179] * Using the docker driver based on existing profile
	I1223 00:00:34.164711  687772 start.go:309] selected driver: docker
	I1223 00:00:34.164723  687772 start.go:928] validating driver "docker" against &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.164829  687772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1223 00:00:34.165640  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.231050  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.221418272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.231362  687772 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1223 00:00:34.231388  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:34.231454  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:34.231489  687772 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.233188  687772 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1223 00:00:34.234320  687772 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:34.235503  687772 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:34.236442  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:34.236471  687772 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1223 00:00:34.236486  687772 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:34.236540  687772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:34.236575  687772 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1223 00:00:34.236586  687772 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1223 00:00:34.236749  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.256540  687772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:34.256557  687772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1223 00:00:34.256572  687772 cache.go:243] Successfully downloaded all kic artifacts
	I1223 00:00:34.256623  687772 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1223 00:00:34.256695  687772 start.go:364] duration metric: took 39.918µs to acquireMachinesLock for "newest-cni-348344"
	I1223 00:00:34.256714  687772 start.go:96] Skipping create...Using existing machine configuration
	I1223 00:00:34.256719  687772 fix.go:54] fixHost starting: 
	I1223 00:00:34.256918  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.273955  687772 fix.go:112] recreateIfNeeded on newest-cni-348344: state=Stopped err=<nil>
	W1223 00:00:34.273978  687772 fix.go:138] unexpected machine state, will restart: <nil>
	W1223 00:00:31.998314  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:34.497435  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:36.498048  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:34.276035  687772 out.go:252] * Restarting existing docker container for "newest-cni-348344" ...
	I1223 00:00:34.276100  687772 cli_runner.go:164] Run: docker start newest-cni-348344
	I1223 00:00:34.520995  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.540249  687772 kic.go:430] container "newest-cni-348344" state is running.
	I1223 00:00:34.540736  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:34.560319  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.560718  687772 machine.go:94] provisionDockerMachine start ...
	I1223 00:00:34.560825  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:34.581907  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:34.582194  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:34.582211  687772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1223 00:00:34.583095  687772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41172->127.0.0.1:33168: read: connection reset by peer
	I1223 00:00:37.726578  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.726621  687772 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1223 00:00:37.726764  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.746947  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.747183  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.747203  687772 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1223 00:00:37.900818  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.900900  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.919317  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.919561  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.919579  687772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1223 00:00:38.062239  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.062284  687772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1223 00:00:38.062331  687772 ubuntu.go:190] setting up certificates
	I1223 00:00:38.062344  687772 provision.go:84] configureAuth start
	I1223 00:00:38.062400  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:38.081263  687772 provision.go:143] copyHostCerts
	I1223 00:00:38.081355  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1223 00:00:38.081386  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1223 00:00:38.081497  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1223 00:00:38.081760  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1223 00:00:38.081785  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1223 00:00:38.081851  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1223 00:00:38.082007  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1223 00:00:38.082027  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1223 00:00:38.082101  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1223 00:00:38.082238  687772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
	I1223 00:00:38.170695  687772 provision.go:177] copyRemoteCerts
	I1223 00:00:38.170759  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1223 00:00:38.170815  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.189123  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.291920  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1223 00:00:38.309166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1223 00:00:38.326166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1223 00:00:38.344564  687772 provision.go:87] duration metric: took 282.199681ms to configureAuth
	I1223 00:00:38.344627  687772 ubuntu.go:206] setting minikube options for container-runtime
	I1223 00:00:38.344911  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:38.344995  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.366260  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.366529  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.366545  687772 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1223 00:00:38.510728  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1223 00:00:38.510754  687772 ubuntu.go:71] root file system type: overlay
	I1223 00:00:38.510908  687772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1223 00:00:38.510979  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.532018  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.532329  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.532458  687772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1223 00:00:38.686369  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1223 00:00:38.686443  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.704974  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.705187  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.705204  687772 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1223 00:00:38.853249  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.853285  687772 machine.go:97] duration metric: took 4.292539002s to provisionDockerMachine
	I1223 00:00:38.853303  687772 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1223 00:00:38.853321  687772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1223 00:00:38.853419  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1223 00:00:38.853487  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.872421  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.979494  687772 ssh_runner.go:195] Run: cat /etc/os-release
	I1223 00:00:38.983309  687772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1223 00:00:38.983340  687772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1223 00:00:38.983352  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1223 00:00:38.983401  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1223 00:00:38.983478  687772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1223 00:00:38.983566  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1223 00:00:38.991328  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:39.009038  687772 start.go:296] duration metric: took 155.718049ms for postStartSetup
	I1223 00:00:39.009116  687772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1223 00:00:39.009156  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.028095  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	W1223 00:00:36.464751  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:38.465798  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:39.126878  687772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1223 00:00:39.131527  687772 fix.go:56] duration metric: took 4.874800463s for fixHost
	I1223 00:00:39.131555  687772 start.go:83] releasing machines lock for "newest-cni-348344", held for 4.874847678s
	I1223 00:00:39.131653  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:39.149572  687772 ssh_runner.go:195] Run: cat /version.json
	I1223 00:00:39.149644  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.149684  687772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1223 00:00:39.149791  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.169897  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.170262  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.329086  687772 ssh_runner.go:195] Run: systemctl --version
	I1223 00:00:39.336405  687772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1223 00:00:39.341291  687772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1223 00:00:39.341348  687772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1223 00:00:39.349279  687772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1223 00:00:39.349306  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.349351  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.349509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.363512  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1223 00:00:39.372815  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1223 00:00:39.381448  687772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.381508  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1223 00:00:39.390109  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.399162  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1223 00:00:39.408060  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.416584  687772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1223 00:00:39.424711  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1223 00:00:39.433432  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1223 00:00:39.442056  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1223 00:00:39.450905  687772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1223 00:00:39.458512  687772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1223 00:00:39.466991  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:39.550442  687772 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1223 00:00:39.632902  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.632954  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.633002  687772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1223 00:00:39.646974  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.659240  687772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1223 00:00:39.675979  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.688406  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1223 00:00:39.700912  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.715235  687772 ssh_runner.go:195] Run: which cri-dockerd
	I1223 00:00:39.718945  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1223 00:00:39.726972  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1223 00:00:39.739559  687772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1223 00:00:39.825824  687772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1223 00:00:39.909620  687772 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.909743  687772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1223 00:00:39.923453  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1223 00:00:39.935401  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:40.031388  687772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1223 00:00:40.779801  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1223 00:00:40.794700  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1223 00:00:40.807727  687772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1223 00:00:40.821730  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:40.835038  687772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1223 00:00:40.917554  687772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1223 00:00:41.006720  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.090943  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1223 00:00:41.119047  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1223 00:00:41.131277  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.215437  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1223 00:00:41.283622  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:41.298485  687772 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1223 00:00:41.298551  687772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1223 00:00:41.302554  687772 start.go:564] Will wait 60s for crictl version
	I1223 00:00:41.302645  687772 ssh_runner.go:195] Run: which crictl
	I1223 00:00:41.306307  687772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1223 00:00:41.332425  687772 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1223 00:00:41.332495  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.358777  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
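Both `docker version` invocations above use Go-template formatting to extract just the server version string. A hedged sketch of the same probe (assumes a docker CLI on PATH; not minikube's code):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the log runs twice: docker version --format {{.Server.Version}}
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker server version:", strings.TrimSpace(string(out))) // e.g. 29.1.3
}
```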
	I1223 00:00:41.385668  687772 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1223 00:00:41.385749  687772 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1223 00:00:41.403696  687772 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1223 00:00:41.407961  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
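The bash one-liner above is an idempotent hosts-file update: drop any stale line ending in `\thost.minikube.internal`, then append the current gateway IP. The same rewrite expressed in Go (paths and names taken from the log; a sketch, and one that needs root to actually write /etc/hosts):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line for the given hostname and
// appends "ip\thostname", mirroring the grep -v + echo trick in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```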
	I1223 00:00:41.419714  687772 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1223 00:00:38.997749  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:40.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:41.420647  687772 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1223 00:00:41.420848  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:41.420937  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.444049  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.444075  687772 docker.go:624] Images already preloaded, skipping extraction
	I1223 00:00:41.444128  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.466181  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.466206  687772 cache_images.go:86] Images are preloaded, skipping loading
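The preload check above lists `docker images` twice and concludes the required set is already present; the two listings return the same images in different order, which is why a set-membership comparison (not a line-by-line diff) is the right test. A sketch under that assumption, with a few of the required images copied from the log:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// The same listing the log captures: docker images --format {{.Repository}}:{{.Tag}}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	required := []string{ // subset of the v1.35.0-rc.1 set shown in the log
		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
		"registry.k8s.io/etcd:3.6.6-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
		"registry.k8s.io/pause:3.10.1",
	}
	for _, img := range required {
		if !have[img] {
			log.Fatalf("missing preloaded image: %s", img)
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
```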
	I1223 00:00:41.466215  687772 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1223 00:00:41.466312  687772 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1223 00:00:41.466372  687772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1223 00:00:41.520065  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:41.520097  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:41.520116  687772 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1223 00:00:41.520151  687772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1223 00:00:41.520280  687772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1223 00:00:41.520348  687772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1223 00:00:41.529909  687772 binaries.go:51] Found k8s binaries, skipping transfer
	I1223 00:00:41.529987  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1223 00:00:41.538670  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1223 00:00:41.552566  687772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1223 00:00:41.565766  687772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
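The 2221-byte kubeadm.yaml.new just shipped is the multi-document YAML rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration stacked in one file. Recent kubeadm releases ship a `kubeadm config validate` subcommand; nothing in this log runs it, so the following is only a hand-check sketch for a file like this:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Hypothetical local sanity check of the rendered config; path from the log.
	cmd := exec.Command("kubeadm", "config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("invalid kubeadm config: %v\n%s", err, out)
	}
	log.Println("kubeadm config is valid")
}
```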
	I1223 00:00:41.578604  687772 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1223 00:00:41.582388  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.592239  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.689716  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:41.716265  687772 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1223 00:00:41.716293  687772 certs.go:195] generating shared ca certs ...
	I1223 00:00:41.716315  687772 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:41.716492  687772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1223 00:00:41.716548  687772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1223 00:00:41.716557  687772 certs.go:257] generating profile certs ...
	I1223 00:00:41.716731  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1223 00:00:41.716814  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1223 00:00:41.716864  687772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1223 00:00:41.716992  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1223 00:00:41.717032  687772 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1223 00:00:41.717041  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1223 00:00:41.717076  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1223 00:00:41.717110  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1223 00:00:41.717142  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1223 00:00:41.717206  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:41.718210  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1223 00:00:41.739858  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1223 00:00:41.759304  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1223 00:00:41.777433  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1223 00:00:41.794878  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1223 00:00:41.811695  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1223 00:00:41.829786  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1223 00:00:41.846666  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1223 00:00:41.863546  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1223 00:00:41.880762  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1223 00:00:41.898275  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1223 00:00:41.915259  687772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1223 00:00:41.927541  687772 ssh_runner.go:195] Run: openssl version
	I1223 00:00:41.933577  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.940758  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1223 00:00:41.948346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952096  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952140  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.987400  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1223 00:00:41.995158  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.002657  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1223 00:00:42.010346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014050  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014097  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.048067  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1223 00:00:42.055784  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.063393  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1223 00:00:42.071081  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075446  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075515  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.110438  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
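Each certificate installed under /usr/share/ca-certificates is then exposed to OpenSSL's trust lookup by symlinking it as `<subject-hash>.0` in /etc/ssl/certs; that is what the `openssl x509 -hash -noout` plus `test -L` pairs above establish (b5213941.0, 51391683.0, 3ec20f2e.0). A sketch of the convention, shelling out to openssl for the hash (illustrative, needs root for /etc/ssl/certs):

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ignore error; emulate ln -fs
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}
```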
	I1223 00:00:42.118365  687772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1223 00:00:42.122225  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1223 00:00:42.157294  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1223 00:00:42.192810  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1223 00:00:42.226497  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1223 00:00:42.261017  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1223 00:00:42.299910  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
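`-checkend 86400` makes openssl exit non-zero if a certificate expires within the next 24 hours, which is how each control-plane cert above is screened before reuse. The same check can be done natively with crypto/x509, shown here as an illustrative alternative rather than what minikube runs:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: openssl x509 -noout -checkend 86400
	if time.Until(cert.NotAfter) < 24*time.Hour {
		log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
	}
	log.Println("certificate valid for at least another 24h")
}
```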
	I1223 00:00:42.335335  687772 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:42.335492  687772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1223 00:00:42.355576  687772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1223 00:00:42.364792  687772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1223 00:00:42.364813  687772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1223 00:00:42.364868  687772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1223 00:00:42.373141  687772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1223 00:00:42.374088  687772 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.374674  687772 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-348344" cluster setting kubeconfig missing "newest-cni-348344" context setting]
	I1223 00:00:42.375665  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.377667  687772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1223 00:00:42.385784  687772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1223 00:00:42.385817  687772 kubeadm.go:602] duration metric: took 20.99658ms to restartPrimaryControlPlane
	I1223 00:00:42.385829  687772 kubeadm.go:403] duration metric: took 50.508354ms to StartCluster
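The restart path hinges on that `diff -u` at 00:00:42.377: a fresh kubeadm.yaml.new is written first, and only a non-empty diff against the kubeadm.yaml the running cluster was started from would force reconfiguration. A minimal sketch of that write-then-diff pattern (file names from the log; simplified, since the real code runs diff over SSH):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// needsReconfig writes the freshly rendered config and compares it with the
// config the running cluster was started from.
func needsReconfig(current, rendered string, newConfig []byte) (bool, error) {
	if err := os.WriteFile(rendered, newConfig, 0644); err != nil {
		return false, err
	}
	err := exec.Command("diff", "-u", current, rendered).Run()
	if err == nil {
		return false, nil // identical: "does not require reconfiguration"
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // diff exit status 1 means the files differ
	}
	return false, err // exit status 2 or exec failure
}

func main() {
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n")
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new", cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("reconfiguration needed:", changed)
}
```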
	I1223 00:00:42.385848  687772 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.385918  687772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.387577  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.387909  687772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:42.388038  687772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:42.388131  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:42.388136  687772 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-348344"
	I1223 00:00:42.388154  687772 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-348344"
	I1223 00:00:42.388170  687772 addons.go:70] Setting dashboard=true in profile "newest-cni-348344"
	I1223 00:00:42.388189  687772 addons.go:70] Setting default-storageclass=true in profile "newest-cni-348344"
	I1223 00:00:42.388208  687772 addons.go:239] Setting addon dashboard=true in "newest-cni-348344"
	W1223 00:00:42.388222  687772 addons.go:248] addon dashboard should already be in state true
	I1223 00:00:42.388226  687772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-348344"
	I1223 00:00:42.388262  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388184  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388611  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388764  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388771  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.390679  687772 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:42.391709  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:42.411960  687772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1223 00:00:42.411964  687772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:42.412088  687772 addons.go:239] Setting addon default-storageclass=true in "newest-cni-348344"
	I1223 00:00:42.412122  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.412456  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.413023  687772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.413043  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:42.413094  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.416720  687772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1223 00:00:42.418014  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1223 00:00:42.418038  687772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1223 00:00:42.418101  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.435878  687772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.435898  687772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:42.435951  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.437317  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.440138  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.468807  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.547260  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:42.600477  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1223 00:00:42.600501  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1223 00:00:42.601363  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.607651  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.614184  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1223 00:00:42.614204  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1223 00:00:42.628125  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1223 00:00:42.628154  687772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1223 00:00:42.643093  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1223 00:00:42.643121  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1223 00:00:42.702024  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1223 00:00:42.702052  687772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1223 00:00:42.716211  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1223 00:00:42.716238  687772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1223 00:00:42.729547  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1223 00:00:42.729569  687772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1223 00:00:42.742218  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1223 00:00:42.742246  687772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1223 00:00:42.754716  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:42.754741  687772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1223 00:00:42.767403  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.251127  687772 api_server.go:52] waiting for apiserver process to appear ...
	W1223 00:00:43.251190  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.251231  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251243  687772 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:43.251536  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.392258  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.445582  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.478824  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.509277  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:43.537617  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.565023  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.661224  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.715310  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.751478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.004029  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.062136  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:40.965708  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.465160  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.498161  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:45.997960  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:44.088452  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.143262  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.251335  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.383985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:44.396693  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.439768  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:44.455552  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.902917  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.962468  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.251805  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.326701  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:45.400433  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.424647  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:45.480956  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.751310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.799387  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:45.854148  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.251658  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.752208  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.836006  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:46.890186  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.995326  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:47.057423  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.251702  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:47.426474  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:47.485952  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.752131  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:48.207567  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:48.251315  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:48.262862  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.519836  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:48.578806  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.752112  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:45.465342  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.465739  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:50.498122  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
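Three test processes are interleaved in this stream: 687772 is the functional-384766 start failing against localhost:8443, 679852 is waiting on coredns in kubenet-003676, and 622784 is polling the node object of no-preload-063943 at 192.168.103.2:8443, which is refusing connections just like localhost:8443 above. The node_ready check amounts to fetching the Node and reading its Ready condition; a sketch with client-go (hypothetical helper, not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches a Node and reports whether its Ready condition is True.
// When the apiserver is down, the Get itself fails with "connection refused",
// which is exactly the error the log shows being retried.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), cs, "no-preload-063943")
	fmt.Println(ready, err)
}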
	I1223 00:00:49.251876  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.751844  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.890160  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:49.947020  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.066047  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:50.120129  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.251336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:50.558241  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:50.615271  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.751572  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.252351  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.751913  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.251786  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.752327  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.791985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:52.850108  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:53.251446  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:53.751646  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
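The repeating `sudo pgrep -xnf kube-apiserver.*minikube.*` lines run at a steady ~500ms cadence: while the addon applies keep failing, minikube keeps checking whether a kube-apiserver process is alive at all. A sketch of that poll loop under the same assumptions (hypothetical waitForProcess helper; pgrep exits 0 only when a match exists, and -xnf matches the full command line):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern appears
// or ctx expires. pgrep -xnf matches the whole command line, which is
// how the log distinguishes the minikube apiserver from any other.
func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // cadence seen in the log
	defer ticker.Stop()
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exit 0: at least one match
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println("apiserver process never appeared:", err)
	}
}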
	W1223 00:00:49.964544  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:51.964692  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:53.964842  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:52.997659  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:54.998340  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:54.252244  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.751444  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.347206  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:55.401615  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:55.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.936027  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:55.995088  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:56.251432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:56.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.111686  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:57.169648  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:57.251735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.751676  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.251279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.751377  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:55.965091  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:58.465307  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:59.465067  679852 pod_ready.go:94] pod "coredns-66bc5c9577-v4sr7" is "Ready"
	I1223 00:00:59.465093  679852 pod_ready.go:86] duration metric: took 31.505726579s for pod "coredns-66bc5c9577-v4sr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.467499  679852 pod_ready.go:83] waiting for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.471040  679852 pod_ready.go:94] pod "etcd-kubenet-003676" is "Ready"
	I1223 00:00:59.471063  679852 pod_ready.go:86] duration metric: took 3.544638ms for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.472907  679852 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.476385  679852 pod_ready.go:94] pod "kube-apiserver-kubenet-003676" is "Ready"
	I1223 00:00:59.476406  679852 pod_ready.go:86] duration metric: took 3.481083ms for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.478385  679852 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.663149  679852 pod_ready.go:94] pod "kube-controller-manager-kubenet-003676" is "Ready"
	I1223 00:00:59.663178  679852 pod_ready.go:86] duration metric: took 184.769862ms for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.863586  679852 pod_ready.go:83] waiting for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.263634  679852 pod_ready.go:94] pod "kube-proxy-4ftjm" is "Ready"
	I1223 00:01:00.263661  679852 pod_ready.go:86] duration metric: took 400.030267ms for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.464316  679852 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863672  679852 pod_ready.go:94] pod "kube-scheduler-kubenet-003676" is "Ready"
	I1223 00:01:00.863704  679852 pod_ready.go:86] duration metric: took 399.359894ms for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863716  679852 pod_ready.go:40] duration metric: took 32.907880274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1223 00:01:00.909769  679852 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1223 00:01:00.911549  679852 out.go:179] * Done! kubectl is now configured to use "kubenet-003676" cluster and "default" namespace by default
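The kubenet-003676 start (PID 679852) finishes cleanly here: each control-plane component is polled for pods with its label until the Ready condition is True, a per-pod duration metric is recorded, and the kubectl 1.35.0 / cluster 1.34.3 skew of one minor version is noted as acceptable. A sketch of that per-label readiness wait (hypothetical helper using client-go, not minikube's actual pod_ready.go; the selectors are the ones listed in the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsReady returns true when every kube-system pod matching selector
// has PodReady=True. An empty list counts as not ready, since the pod
// may simply not have been scheduled yet.
func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

// waitAll polls each selector in turn, mirroring the sequential
// "waiting for pod ..." lines above.
func waitAll(cs kubernetes.Interface) {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		start := time.Now()
		for {
			ok, err := podsReady(context.Background(), cs, sel)
			if ok {
				break
			}
			if err != nil {
				fmt.Println("will retry:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Printf("took %s for %s\n", time.Since(start), sel)
	}
}

func main() { /* build a clientset as in the earlier sketch, then waitAll(cs) */ }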
	W1223 00:00:57.497653  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:59.497943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:59.251660  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.751533  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.252079  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.347519  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:00.405959  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.406017  687772 retry.go:84] will retry after 5.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.564176  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:00.620480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.751684  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.251318  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.252159  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.751813  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.251679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.752247  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:01.997413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:03.998103  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:06.498109  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:04.252308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.751451  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.252117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.751808  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.759328  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:05.824086  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:05.824133  687772 retry.go:84] will retry after 18.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.105622  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:06.162303  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.251478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:06.751336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.751827  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.251532  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.751779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:08.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:11.498214  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:09.251540  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.503324  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:09.562697  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.562742  687772 retry.go:84] will retry after 15.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.751986  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.251441  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.751356  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.251389  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.752279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.251802  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.751324  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.251818  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.751866  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:13.998099  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:16.497424  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:14.252139  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.751583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.252310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.751625  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.251842  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.252084  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.751783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.251376  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.751964  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:18.498157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:20.998023  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:19.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.751822  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.251704  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.568697  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:20.639449  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.639500  687772 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.751734  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.252249  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.751801  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.251412  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.751366  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.252197  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.751273  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:23.498226  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:25.998233  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:24.252133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.590346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:24.662633  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.662674  687772 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
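
Between applies, the runner polls twice a second for a kube-apiserver process with pgrep. A rough local equivalent of that loop (interval and pattern copied from the log lines; in the report the command actually runs over ssh_runner inside the node, and this standalone function is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls `pgrep -xnf` for a kube-apiserver
    // process until one appears or the deadline passes, roughly matching
    // the half-second cadence visible above.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(time.Minute); err != nil {
            fmt.Println(err)
        }
    }
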
	I1223 00:01:24.751773  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:25.211650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:01:25.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:25.284581  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:25.752154  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.251728  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.751953  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.251838  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.751740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.251778  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.752182  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:28.498149  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:30.998068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:29.252321  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.752290  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.251523  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.251957  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.751675  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.251501  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.610650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:32.668272  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.668321  687772 retry.go:84] will retry after 25.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.751294  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.251566  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.751514  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:33.497906  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:35.997708  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:34.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.751347  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.251743  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.751789  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.251397  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.751367  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.251392  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.751873  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.751798  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:38.498282  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:40.998196  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:39.251316  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.752288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.251339  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.751372  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.752308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.752021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:42.776761  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.776791  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:42.776839  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:42.797079  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.797110  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:42.797168  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:42.817812  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.817839  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:42.817907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:42.837835  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.837868  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:42.837924  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:42.858165  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.858188  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:42.858231  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:42.878211  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.878236  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:42.878281  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:42.897739  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.897762  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:42.897817  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:42.918314  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.918340  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
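
The block above checks each control-plane component by listing containers whose names match a k8s_<component> filter; zero IDs back means the component's container never started. A sketch of that lookup (minikube's real version lives behind the logs.go frames; this standalone form is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of containers (running or not) whose name
    // matches the given filter, as the `docker ps -a --filter=name=...`
    // calls in the log do.
    func containerIDs(nameFilter string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+nameFilter, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
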
	I1223 00:01:42.918352  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:42.918362  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:42.965734  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:42.965766  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:42.986555  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:42.986585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:43.052069  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:43.052092  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:43.052108  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:43.072511  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:43.072542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:01:43.497482  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:45.497538  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:45.620354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:45.631856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:45.651448  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.651473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:45.651528  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:45.671204  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.671229  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:45.671279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:45.690397  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.690418  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:45.690470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:45.711316  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.711342  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:45.711400  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:45.731731  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.731755  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:45.731812  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:45.751415  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.751442  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:45.751500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:45.770135  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.770157  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:45.770202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:45.789088  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.789114  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:45.789127  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:45.789139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:45.819714  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:45.819741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:45.867983  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:45.868013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:45.888018  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:45.888051  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:45.945247  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
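
Every describe-nodes attempt fails at the TCP level before kubectl can even fetch the API group list. One way to confirm the same condition without kubectl is a direct probe of the apiserver's readiness endpoint; the /readyz path and the skip-verify choice are assumptions for a quick localhost diagnostic, not something the test harness does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the apiserver's /readyz directly; certificate verification
        // is skipped because this is a local diagnostic, not a real client.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8443/readyz")
        if err != nil {
            // While the apiserver is down this fails the same way as the
            // refused dials in the log above.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("readyz status:", resp.Status)
    }
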
	I1223 00:01:45.945270  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:45.945286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:47.390289  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:47.445750  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:47.445788  687772 retry.go:84] will retry after 18.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:48.464795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:48.476515  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:48.499002  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.499028  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:48.499082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:48.523130  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.523165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:48.523225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:48.547882  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.547904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:48.547950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:48.567534  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.567560  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:48.567627  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:48.590197  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.590222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:48.590280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:48.609279  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.609306  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:48.609357  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:48.628700  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.628731  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:48.628795  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:48.648378  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.648402  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:48.648416  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:48.648433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:48.672700  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:48.672741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:48.743864  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:48.743885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:48.743897  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:48.762911  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:48.762941  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:48.793205  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:48.793235  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1223 00:01:47.498181  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:49.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:51.128346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:51.182480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.182522  687772 retry.go:84] will retry after 29.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.341781  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:51.353222  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:51.373421  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.373447  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:51.373497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:51.392557  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.392613  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:51.392675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:51.411939  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.411964  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:51.412018  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:51.430968  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.430998  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:51.431054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:51.451234  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.451266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:51.451329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:51.470904  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.470947  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:51.471017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:51.490121  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.490146  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:51.490201  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:51.511843  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.511875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:51.511891  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:51.511906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:51.545106  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:51.545138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:51.594541  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:51.594576  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:51.615455  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:51.615485  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:51.680061  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:51.680080  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:51.680091  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1223 00:01:52.497841  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:54.498062  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:56.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
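
The 622784 process, meanwhile, keeps asking whether node no-preload-063943 has reached the Ready condition and logs each refused dial. A client-go sketch of that check, under the same kubeconfig path seen elsewhere in the log; this is a minimal illustration, not minikube's node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its Ready condition
    // is True. While the apiserver is down, Get returns the same
    // "connection refused" errors seen in the log.
    func nodeReady(client kubernetes.Interface, name string) (bool, error) {
        node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        for {
            ready, err := nodeReady(client, "no-preload-063943")
            if err != nil {
                fmt.Println("will retry:", err)
            } else if ready {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2500 * time.Millisecond)
        }
    }
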
	I1223 00:01:54.199432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:54.211004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:54.230469  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.230498  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:54.230548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:54.251212  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.251245  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:54.251300  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:54.274147  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.274177  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:54.274238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:54.297381  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.297413  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:54.297471  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:54.316290  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.316315  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:54.316362  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:54.335315  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.335339  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:54.335393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:54.354058  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.354089  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:54.354144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:54.372661  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.372686  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:54.372700  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:54.372715  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:54.417565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:54.417601  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:54.438013  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:54.438041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:54.497552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:54.497575  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:54.497589  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.517495  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:54.517523  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
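
The block above is one iteration of minikube's apiserver wait loop: pgrep for a kube-apiserver process, probe Docker for each expected control-plane container by its k8s_ name prefix, then fall back to collecting kubelet, dmesg, describe-nodes, Docker, and container-status logs when nothing is found. A minimal standalone sketch of the same container probe (assumptions: docker is available and cri-dockerd's k8s_ naming convention applies, as in the log):

    # Probe each control-plane component the way the log lines above do;
    # empty output reproduces the "0 containers" / "No container was found" case.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
      echo "${c}: ${ids:-<none>}"
    done
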
	I1223 00:01:57.056037  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:57.067478  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:57.087084  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.087114  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:57.087183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:57.105193  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.105218  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:57.105270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:57.122859  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.122885  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:57.122931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:57.141996  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.142021  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:57.142074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:57.160005  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.160032  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:57.160083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:57.178889  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.178915  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:57.178989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:57.196419  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.196446  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:57.196498  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:57.214764  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.214790  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:57.214804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:57.214817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:57.266333  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:57.266370  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:57.289301  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:57.289330  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:57.347060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:57.347099  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:57.347117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:57.370222  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:57.370257  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:58.466074  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:58.519779  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:58.519828  687772 retry.go:84] will retry after 42.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:01:58.498177  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:00.998033  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:59.898063  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:59.909410  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:59.927950  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.927974  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:59.928017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:59.946773  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.946800  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:59.946861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:59.964419  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.964443  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:59.964500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:59.982454  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.982478  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:59.982537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:00.000838  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.000860  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:00.000929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:00.018673  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.018696  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:00.018747  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:00.035944  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.035973  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:00.036027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:00.053640  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.053666  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:00.053679  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:00.053700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:00.098844  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:00.098870  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:00.120198  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:00.120224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:00.175459  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:00.175477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:00.175490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:00.194106  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:00.194146  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:02.722361  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:02.733721  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:02.753939  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.753963  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:02.754025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:02.773570  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.773610  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:02.773665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:02.793427  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.793451  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:02.793514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:02.812154  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.812183  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:02.812241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:02.830757  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.830777  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:02.830819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:02.849140  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.849163  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:02.849206  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:02.867505  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.867529  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:02.867584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:02.885892  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.885920  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:02.885935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:02.885950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:02.933880  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:02.933906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:02.955273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:02.955304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:03.009924  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:03.009951  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:03.010012  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:03.028798  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:03.028821  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:03.497953  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:05.997506  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:05.557718  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:05.569769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:05.588873  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.588899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:05.588946  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:05.607265  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.607289  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:05.607342  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:05.625761  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.625790  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:05.625860  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:05.643443  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.643464  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:05.643513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:05.661241  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.661266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:05.661314  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:05.679744  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.679764  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:05.679805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:05.697808  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.697831  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:05.697878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:05.716222  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.716245  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:05.716255  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:05.716269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:05.772404  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:05.772437  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:05.793610  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:05.793643  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:05.849453  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:05.849479  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:05.849493  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:05.868250  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:05.868273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:05.997019  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:02:06.048845  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:06.048961  687772 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
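
Same failure mode for the default-storageclass addon. Before retrying an apply, a cheaper check is whether the apiserver is serving at all; a sketch run from inside the node (assumes curl is present):

    # /healthz returns "ok" once the apiserver is up; here it would fail
    # with "connection refused", matching every probe in this log.
    curl -ksf https://localhost:8443/healthz || echo "apiserver not serving"
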
	I1223 00:02:08.398283  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:08.409794  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:08.428854  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.428878  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:08.428927  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:08.447223  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.447248  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:08.447292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:08.464797  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.464816  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:08.464857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:08.483364  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.483386  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:08.483449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:08.501985  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.502014  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:08.502064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:08.520995  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.521020  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:08.521071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:08.542089  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.542109  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:08.542149  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:08.560448  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.560468  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:08.560478  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:08.560489  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:08.606044  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:08.606073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:08.626314  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:08.626339  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:08.681455  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:08.681477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:08.681490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:08.699804  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:08.699830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:07.998312  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:10.498076  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:11.230050  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:11.241320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:11.260234  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.260256  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:11.260301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:11.279535  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.279558  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:11.279635  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:11.297775  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.297799  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:11.297843  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:11.315756  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.315780  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:11.315823  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:11.333828  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.333850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:11.333894  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:11.351608  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.351631  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:11.351673  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:11.371003  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.371024  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:11.371065  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:11.388642  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.388662  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:11.388671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:11.388681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:11.406664  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:11.406687  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.434828  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:11.434851  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:11.481940  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:11.481966  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:11.503051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:11.503077  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:11.563912  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:14.064094  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:02:12.997638  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:15.497547  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:15.997757  622784 node_ready.go:38] duration metric: took 6m0.000870759s for node "no-preload-063943" to be "Ready" ...
	I1223 00:02:15.999489  622784 out.go:203] 
	W1223 00:02:16.002745  622784 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1223 00:02:16.002767  622784 out.go:285] * 
	W1223 00:02:16.002971  622784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1223 00:02:16.004060  622784 out.go:203] 
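
This is where the second process in the interleaved log (pid 622784, the no-preload SecondStart run) gives up: its node_ready.go poll against 192.168.103.2:8443 has returned connection refused for the full 6m0s budget, producing the GUEST_START exit above. A hypothetical standalone form of the request it was retrying (the real code uses the Go client library, not curl, and would also need credentials):

    # Fails with "connect: connection refused", exactly as in the retries above.
    curl -ks https://192.168.103.2:8443/api/v1/nodes/no-preload-063943
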
	
	
	==> Docker <==
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.023421578Z" level=info msg="Restoring containers: start."
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.041299594Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.059213099Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.611358548Z" level=info msg="Loading containers: done."
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620920645Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620957367Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620991634Z" level=info msg="Initializing buildkit"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.639509005Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645541635Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645622881Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645627452Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645628833Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:56:13 no-preload-063943 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:56:14 no-preload-063943 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:56:14 no-preload-063943 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
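
Note the daemon's own warning above ("Support for cgroup v1 is deprecated..."): Docker confirms this host is still on the legacy cgroup hierarchy. One way to read the version Docker sees, assuming a reasonably recent CLI:

    # Prints "1" on this host; "2" on a unified-hierarchy (cgroup v2) host.
    docker info --format '{{.CgroupVersion}}'
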
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:16.955769    7489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:16.956260    7489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:16.957931    7489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:16.958436    7489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:16.959996    7489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[Dec22 23:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +0.088099] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 12 57 26 ed f1 08 06
	[  +5.341024] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[ +14.537406] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 72 df 3b 35 8d 08 06
	[  +0.000388] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +2.465032] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 84 3f 6a 28 22 08 06
	[  +0.000373] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[Dec23 00:00] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	[Dec23 00:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 20 71 68 66 a5 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	
	
	==> kernel <==
	 00:02:16 up  3:44,  0 user,  load average: 4.31, 2.85, 2.07
	Linux no-preload-063943 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 23 00:02:13 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:02:14 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 23 00:02:14 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:14 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:14 no-preload-063943 kubelet[7310]: E1223 00:02:14.534105    7310 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:02:14 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:02:14 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:02:15 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 23 00:02:15 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:15 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:15 no-preload-063943 kubelet[7321]: E1223 00:02:15.282661    7321 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:02:15 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:02:15 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:02:15 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 23 00:02:15 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:15 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:16 no-preload-063943 kubelet[7332]: E1223 00:02:16.044631    7332 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:02:16 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:02:16 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:02:16 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 23 00:02:16 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:16 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:02:16 no-preload-063943 kubelet[7394]: E1223 00:02:16.790271    7394 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:02:16 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:02:16 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 2 (320.965559ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.70s)
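The crash loop above is the root cause of this failure: the v1.35.0-rc.1 kubelet refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the control plane never comes up and every kubectl call against localhost:8443 is refused. A quick way to confirm which cgroup version a host runs is the stat filesystem-type check (a minimal sketch, not part of the test output):

	# "cgroup2fs" means cgroup v2 (what this kubelet requires); "tmpfs" means cgroup v1,
	# which matches the Docker "cgroup v1 is deprecated" warning later in this report.
	stat -fc %T /sys/fs/cgroup/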

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (93.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-348344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1222 23:59:06.386747   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:59:10.250076   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kindnet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-348344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m31.765071703s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-348344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
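The stderr above suggests --validate=false, but validation is not the real problem: every kubectl apply fails because nothing is listening on localhost:8443. A minimal probe (a sketch, assuming curl is available in the node image; /readyz is the apiserver's standard health endpoint) separates the two cases:

	# "connection refused" -> the control plane is down (this failure);
	# "ok" -> the apiserver is up and the validation error is genuine.
	minikube -p newest-cni-348344 ssh -- curl -sk https://localhost:8443/readyz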
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-348344
helpers_test.go:244: (dbg) docker inspect newest-cni-348344:

-- stdout --
	[
	    {
	        "Id": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	        "Created": "2025-12-22T23:50:45.124975619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 562959,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:50:45.155175204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hostname",
	        "HostsPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hosts",
	        "LogPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b-json.log",
	        "Name": "/newest-cni-348344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-348344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-348344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	                "LowerDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-348344",
	                "Source": "/var/lib/docker/volumes/newest-cni-348344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-348344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-348344",
	                "name.minikube.sigs.k8s.io": "newest-cni-348344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c751392a43e7100580d0d30ecd5964e4f8e21563f623abfc3d9bd467dea01c55",
	            "SandboxKey": "/var/run/docker/netns/c751392a43e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-348344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1020bfe2df349af00e9e2f4197eff27d709a25503c20a26c662019055cba21bb",
	                    "EndpointID": "a82a4e068e03b8467a5f670e0962c28ad41d4e0a0ed6287f98c7bde60083d8db",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "1a:55:fe:37:1b:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-348344",
	                        "133dc19d84d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
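When only a field or two of an inspect dump like the one above is needed, docker inspect's --format flag takes a Go template instead of emitting the full JSON. A sketch against this container:

	# Container state and the host port mapped to the apiserver's 8443/tcp:
	docker inspect -f '{{.State.Status}}' newest-cni-348344
	docker inspect -f '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' newest-cni-348344
	# Per the JSON above these print "running" and "33116".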
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344: exit status 6 (333.478863ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1223 00:00:31.474149  686704 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
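The exit-6 status matches the warning in its stdout: the container is running, but the profile's endpoint is missing from the kubeconfig, so the status check cannot reach the cluster. minikube's own hint is the remedy; a sketch:

	# Rewrite the kubeconfig entry for this profile, then confirm the context exists:
	minikube -p newest-cni-348344 update-context
	kubectl config get-contexts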
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │            PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-003676 sudo cat /etc/containerd/config.toml                                                                                                                          │ bridge-003676                 │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo systemctl cat docker --no-pager                                                                                                                         │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p bridge-003676 sudo containerd config dump                                                                                                                                   │ bridge-003676                 │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │                     │
	│ ssh     │ -p flannel-003676 sudo cat /etc/docker/daemon.json                                                                                                                             │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p bridge-003676 sudo systemctl status crio --all --full --no-pager                                                                                                            │ bridge-003676                 │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │                     │
	│ ssh     │ -p flannel-003676 sudo docker system info                                                                                                                                      │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo systemctl status cri-docker --all --full --no-pager                                                                                                     │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo systemctl cat cri-docker --no-pager                                                                                                                     │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ delete  │ -p bridge-003676                                                                                                                                                               │ bridge-003676                 │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                          │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo cri-dockerd --version                                                                                                                                   │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo systemctl status containerd --all --full --no-pager                                                                                                     │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo systemctl cat containerd --no-pager                                                                                                                     │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo cat /lib/systemd/system/containerd.service                                                                                                              │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo cat /etc/containerd/config.toml                                                                                                                         │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo containerd config dump                                                                                                                                  │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ start   │ -p kubenet-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker                          │ kubenet-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │                     │
	│ ssh     │ -p flannel-003676 sudo systemctl cat crio --no-pager                                                                                                                           │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                 │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ ssh     │ -p flannel-003676 sudo crio config                                                                                                                                             │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ delete  │ -p flannel-003676                                                                                                                                                              │ flannel-003676                │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │ 22 Dec 25 23:59 UTC │
	│ start   │ -p test-preload-dl-gcs-022654 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker       │ test-preload-dl-gcs-022654    │ jenkins │ v1.37.0 │ 22 Dec 25 23:59 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-022654                                                                                                                                                  │ test-preload-dl-gcs-022654    │ jenkins │ v1.37.0 │ 23 Dec 25 00:00 UTC │ 23 Dec 25 00:00 UTC │
	│ start   │ -p test-preload-dl-github-988233 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker │ test-preload-dl-github-988233 │ jenkins │ v1.37.0 │ 23 Dec 25 00:00 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/23 00:00:13
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1223 00:00:13.555957  684961 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:13.556267  684961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:13.556278  684961 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:13.556282  684961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:13.556571  684961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:13.557269  684961 out.go:368] Setting JSON to false
	I1223 00:00:13.558568  684961 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13354,"bootTime":1766434660,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:13.558647  684961 start.go:143] virtualization: kvm guest
	I1223 00:00:13.559511  684961 out.go:179] * [test-preload-dl-github-988233] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:13.561485  684961 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:13.561542  684961 notify.go:221] Checking for updates...
	I1223 00:00:13.565144  684961 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:13.566765  684961 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:13.568112  684961 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:13.569271  684961 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:13.570917  684961 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:13.572430  684961 config.go:182] Loaded profile config "kubenet-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1223 00:00:13.572563  684961 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:13.572712  684961 config.go:182] Loaded profile config "no-preload-063943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:13.572860  684961 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:13.600795  684961 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:13.600975  684961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:13.671973  684961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-23 00:00:13.661354709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:13.672087  684961 docker.go:319] overlay module found
	I1223 00:00:13.674010  684961 out.go:179] * Using the docker driver based on user configuration
	I1223 00:00:13.674931  684961 start.go:309] selected driver: docker
	I1223 00:00:13.674943  684961 start.go:928] validating driver "docker" against <nil>
	I1223 00:00:13.675027  684961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:13.740234  684961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-12-23 00:00:13.727239331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:13.740452  684961 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1223 00:00:13.741135  684961 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1223 00:00:13.741336  684961 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1223 00:00:13.747362  684961 out.go:179] * Using Docker driver with root privileges
	I1223 00:00:13.748481  684961 cni.go:84] Creating CNI manager for ""
	I1223 00:00:13.748565  684961 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:13.748576  684961 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1223 00:00:13.748675  684961 start.go:353] cluster config:
	{Name:test-preload-dl-github-988233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-github-988233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:13.749952  684961 out.go:179] * Starting "test-preload-dl-github-988233" primary control-plane node in "test-preload-dl-github-988233" cluster
	I1223 00:00:13.751076  684961 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:13.752175  684961 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:13.753246  684961 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime docker
	I1223 00:00:13.753304  684961 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:13.778891  684961 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:13.778919  684961 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 to local cache
	I1223 00:00:13.779004  684961 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local cache directory
	I1223 00:00:13.779016  684961 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local cache directory, skipping pull
	I1223 00:00:13.779020  684961 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in cache, skipping pull
	I1223 00:00:13.779027  684961 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 as a tarball
	I1223 00:00:14.319209  684961 preload.go:148] Found remote preload: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1223 00:00:14.319256  684961 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:14.319477  684961 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime docker
	I1223 00:00:14.322005  684961 out.go:179] * Downloading Kubernetes v1.34.0-rc.2 preload ...
	W1223 00:00:13.998171  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:15.998341  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:14.323013  684961 preload.go:269] Downloading preload from https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1223 00:00:14.323027  684961 preload.go:347] getting checksum for preloaded-images-k8s-v18-v1.34.0-rc.2-docker-overlay2-amd64.tar.lz4 from github api...
	I1223 00:00:15.099232  684961 preload.go:316] Got checksum from Github API "82d257fa9444cfd5aa459a096645048379338c81d9950868972370534cdfb59a"
	I1223 00:00:15.099279  684961 download.go:114] Downloading: https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=sha256:82d257fa9444cfd5aa459a096645048379338c81d9950868972370534cdfb59a -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-docker-overlay2-amd64.tar.lz4
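The downloader pins the preload tarball to a sha256 fetched from the GitHub API before fetching the file itself. The same verification can be reproduced by hand with the URL and checksum from the log above (a sketch):

	curl -fLo preload.tar.lz4 \
	  https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.0-rc.2-docker-overlay2-amd64.tar.lz4
	echo "82d257fa9444cfd5aa459a096645048379338c81d9950868972370534cdfb59a  preload.tar.lz4" | sha256sum -c -
	# prints "preload.tar.lz4: OK" when the download matches the pinned checksum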
	I1223 00:00:19.514174  679852 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1223 00:00:19.514240  679852 kubeadm.go:319] [preflight] Running pre-flight checks
	I1223 00:00:19.514308  679852 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1223 00:00:19.514388  679852 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1223 00:00:19.514429  679852 kubeadm.go:319] OS: Linux
	I1223 00:00:19.514495  679852 kubeadm.go:319] CGROUPS_CPU: enabled
	I1223 00:00:19.514565  679852 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1223 00:00:19.514666  679852 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1223 00:00:19.514778  679852 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1223 00:00:19.514856  679852 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1223 00:00:19.514933  679852 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1223 00:00:19.514991  679852 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1223 00:00:19.515072  679852 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1223 00:00:19.515147  679852 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1223 00:00:19.515278  679852 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1223 00:00:19.515415  679852 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1223 00:00:19.515544  679852 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1223 00:00:19.515652  679852 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1223 00:00:19.517022  679852 out.go:252]   - Generating certificates and keys ...
	I1223 00:00:19.517087  679852 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1223 00:00:19.517148  679852 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1223 00:00:19.517224  679852 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1223 00:00:19.517324  679852 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1223 00:00:19.517388  679852 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1223 00:00:19.517455  679852 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1223 00:00:19.517499  679852 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1223 00:00:19.517629  679852 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-003676 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1223 00:00:19.517675  679852 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1223 00:00:19.517812  679852 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-003676 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1223 00:00:19.517892  679852 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1223 00:00:19.518000  679852 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1223 00:00:19.518045  679852 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1223 00:00:19.518097  679852 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1223 00:00:19.518145  679852 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1223 00:00:19.518226  679852 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1223 00:00:19.518308  679852 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1223 00:00:19.518402  679852 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1223 00:00:19.518488  679852 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1223 00:00:19.518577  679852 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1223 00:00:19.518710  679852 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1223 00:00:19.519688  679852 out.go:252]   - Booting up control plane ...
	I1223 00:00:19.519787  679852 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1223 00:00:19.519889  679852 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1223 00:00:19.519968  679852 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1223 00:00:19.520095  679852 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1223 00:00:19.520189  679852 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1223 00:00:19.520280  679852 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1223 00:00:19.520351  679852 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1223 00:00:19.520384  679852 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1223 00:00:19.520507  679852 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1223 00:00:19.520636  679852 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1223 00:00:19.520735  679852 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001121364s
	I1223 00:00:19.520870  679852 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1223 00:00:19.520981  679852 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1223 00:00:19.521070  679852 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1223 00:00:19.521169  679852 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1223 00:00:19.521264  679852 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.589822453s
	I1223 00:00:19.521370  679852 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.019557336s
	I1223 00:00:19.521432  679852 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501264939s
	I1223 00:00:19.521569  679852 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1223 00:00:19.521755  679852 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1223 00:00:19.521843  679852 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1223 00:00:19.522123  679852 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-003676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1223 00:00:19.522209  679852 kubeadm.go:319] [bootstrap-token] Using token: hrc9kb.p1y6ntzftoumh097
	W1223 00:00:18.498068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:20.997931  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:19.523407  679852 out.go:252]   - Configuring RBAC rules ...
	I1223 00:00:19.523552  679852 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1223 00:00:19.523703  679852 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1223 00:00:19.523871  679852 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1223 00:00:19.523977  679852 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1223 00:00:19.524092  679852 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1223 00:00:19.524165  679852 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1223 00:00:19.524260  679852 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1223 00:00:19.524302  679852 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1223 00:00:19.524351  679852 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1223 00:00:19.524361  679852 kubeadm.go:319] 
	I1223 00:00:19.524419  679852 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1223 00:00:19.524425  679852 kubeadm.go:319] 
	I1223 00:00:19.524484  679852 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1223 00:00:19.524490  679852 kubeadm.go:319] 
	I1223 00:00:19.524509  679852 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1223 00:00:19.524562  679852 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1223 00:00:19.524625  679852 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1223 00:00:19.524634  679852 kubeadm.go:319] 
	I1223 00:00:19.524693  679852 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1223 00:00:19.524699  679852 kubeadm.go:319] 
	I1223 00:00:19.524735  679852 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1223 00:00:19.524743  679852 kubeadm.go:319] 
	I1223 00:00:19.524783  679852 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1223 00:00:19.524842  679852 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1223 00:00:19.524906  679852 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1223 00:00:19.524913  679852 kubeadm.go:319] 
	I1223 00:00:19.524982  679852 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1223 00:00:19.525052  679852 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1223 00:00:19.525060  679852 kubeadm.go:319] 
	I1223 00:00:19.525132  679852 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hrc9kb.p1y6ntzftoumh097 \
	I1223 00:00:19.525219  679852 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5443640003ad4b7907b37ec274fbd08911a8cee85cf5083e20215ea2b9d82bbc \
	I1223 00:00:19.525238  679852 kubeadm.go:319] 	--control-plane 
	I1223 00:00:19.525241  679852 kubeadm.go:319] 
	I1223 00:00:19.525319  679852 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1223 00:00:19.525332  679852 kubeadm.go:319] 
	I1223 00:00:19.525401  679852 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hrc9kb.p1y6ntzftoumh097 \
	I1223 00:00:19.525508  679852 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5443640003ad4b7907b37ec274fbd08911a8cee85cf5083e20215ea2b9d82bbc 
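The --discovery-token-ca-cert-hash value printed above can be recomputed from the cluster CA certificate; a sketch assuming the default kubeadm PKI layout on the control-plane node:

    # Standard kubeadm recipe: SHA-256 over the DER-encoded CA public key
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'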
	I1223 00:00:19.525519  679852 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1223 00:00:19.525535  679852 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1223 00:00:19.525646  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:19.525686  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-003676 minikube.k8s.io/updated_at=2025_12_23T00_00_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=97c570e2878f345404332f23c78ab0f60732b01b minikube.k8s.io/name=kubenet-003676 minikube.k8s.io/primary=true
	I1223 00:00:19.538391  679852 ops.go:34] apiserver oom_adj: -16
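The oom_adj check above reads the API server's OOM adjustment straight from /proc; -16 tells the kernel's OOM killer to strongly prefer other victims. The same read, by hand:

    # Legacy knob as read above, plus its modern replacement
    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj
    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj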
	I1223 00:00:19.639019  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:20.139627  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:20.639793  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:21.139413  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:21.639169  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:22.139259  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:22.639374  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:23.139623  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:23.639691  679852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1223 00:00:23.716645  679852 kubeadm.go:1114] duration metric: took 4.191065723s to wait for elevateKubeSystemPrivileges
	I1223 00:00:23.716681  679852 kubeadm.go:403] duration metric: took 15.978322548s to StartCluster
	I1223 00:00:23.716698  679852 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:23.716776  679852 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:23.718026  679852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:23.718298  679852 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:23.718322  679852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1223 00:00:23.718377  679852 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:23.718480  679852 addons.go:70] Setting storage-provisioner=true in profile "kubenet-003676"
	I1223 00:00:23.718502  679852 addons.go:70] Setting default-storageclass=true in profile "kubenet-003676"
	I1223 00:00:23.718517  679852 addons.go:239] Setting addon storage-provisioner=true in "kubenet-003676"
	I1223 00:00:23.718549  679852 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-003676"
	I1223 00:00:23.718558  679852 host.go:66] Checking if "kubenet-003676" exists ...
	I1223 00:00:23.718505  679852 config.go:182] Loaded profile config "kubenet-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1223 00:00:23.719129  679852 cli_runner.go:164] Run: docker container inspect kubenet-003676 --format={{.State.Status}}
	I1223 00:00:23.719153  679852 cli_runner.go:164] Run: docker container inspect kubenet-003676 --format={{.State.Status}}
	I1223 00:00:23.720624  679852 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:23.721928  679852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:23.744798  679852 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:23.745356  679852 addons.go:239] Setting addon default-storageclass=true in "kubenet-003676"
	I1223 00:00:23.745408  679852 host.go:66] Checking if "kubenet-003676" exists ...
	I1223 00:00:23.745891  679852 cli_runner.go:164] Run: docker container inspect kubenet-003676 --format={{.State.Status}}
	I1223 00:00:23.748794  679852 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:23.748822  679852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:23.748885  679852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-003676
	I1223 00:00:23.773430  679852 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:23.773466  679852 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:23.773533  679852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-003676
	I1223 00:00:23.782852  679852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/kubenet-003676/id_rsa Username:docker}
	I1223 00:00:23.800560  679852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/kubenet-003676/id_rsa Username:docker}
	I1223 00:00:23.858443  679852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1223 00:00:24.036889  679852 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:24.040885  679852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:24.041183  679852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:24.432922  679852 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
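Reading the sed expressions in the replace command above, the injected Corefile stanza should look roughly like the fragment below (reconstructed, not copied from the cluster); it can be checked against the live ConfigMap:

    # Expected stanza spliced in by the sed pipeline above:
    #   hosts {
    #      192.168.76.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'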
	I1223 00:00:24.434214  679852 node_ready.go:35] waiting up to 15m0s for node "kubenet-003676" to be "Ready" ...
	I1223 00:00:24.443680  679852 node_ready.go:49] node "kubenet-003676" is "Ready"
	I1223 00:00:24.443709  679852 node_ready.go:38] duration metric: took 9.46831ms for node "kubenet-003676" to be "Ready" ...
	I1223 00:00:24.443757  679852 api_server.go:52] waiting for apiserver process to appear ...
	I1223 00:00:24.443821  679852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:24.773908  679852 api_server.go:72] duration metric: took 1.055571408s to wait for apiserver process to appear ...
	I1223 00:00:24.773934  679852 api_server.go:88] waiting for apiserver healthz status ...
	I1223 00:00:24.773957  679852 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1223 00:00:24.825663  679852 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1223 00:00:24.826945  679852 api_server.go:141] control plane version: v1.34.3
	I1223 00:00:24.826981  679852 api_server.go:131] duration metric: took 53.037599ms to wait for apiserver health ...
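The control-plane version reported above is served by the API server itself; a quick manual cross-check against the address from this log:

    # /version is readable without client credentials on a default minikube setup
    curl -sk https://192.168.76.2:8443/version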
	I1223 00:00:24.826995  679852 system_pods.go:43] waiting for kube-system pods to appear ...
	I1223 00:00:24.827505  679852 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1223 00:00:23.498144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:25.997511  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:24.829276  679852 addons.go:530] duration metric: took 1.110901556s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1223 00:00:24.829989  679852 system_pods.go:59] 8 kube-system pods found
	I1223 00:00:24.830021  679852 system_pods.go:61] "coredns-66bc5c9577-blbgm" [76efde56-a055-4559-b815-d14f5f6a67f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:24.830032  679852 system_pods.go:61] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:24.830041  679852 system_pods.go:61] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1223 00:00:24.830051  679852 system_pods.go:61] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1223 00:00:24.830059  679852 system_pods.go:61] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:24.830067  679852 system_pods.go:61] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1223 00:00:24.830076  679852 system_pods.go:61] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:24.830082  679852 system_pods.go:61] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1223 00:00:24.830088  679852 system_pods.go:74] duration metric: took 3.08689ms to wait for pod list to return data ...
	I1223 00:00:24.830100  679852 default_sa.go:34] waiting for default service account to be created ...
	I1223 00:00:24.832266  679852 default_sa.go:45] found service account: "default"
	I1223 00:00:24.832289  679852 default_sa.go:55] duration metric: took 2.181302ms for default service account to be created ...
	I1223 00:00:24.832298  679852 system_pods.go:116] waiting for k8s-apps to be running ...
	I1223 00:00:24.834885  679852 system_pods.go:86] 8 kube-system pods found
	I1223 00:00:24.834921  679852 system_pods.go:89] "coredns-66bc5c9577-blbgm" [76efde56-a055-4559-b815-d14f5f6a67f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:24.834932  679852 system_pods.go:89] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:24.834942  679852 system_pods.go:89] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1223 00:00:24.834955  679852 system_pods.go:89] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1223 00:00:24.834962  679852 system_pods.go:89] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:24.834969  679852 system_pods.go:89] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1223 00:00:24.834979  679852 system_pods.go:89] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:24.834988  679852 system_pods.go:89] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1223 00:00:24.835016  679852 retry.go:84] will retry after 300ms: missing components: kube-dns, kube-proxy
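The poll above amounts to listing kube-system pods and checking their phases; an equivalent one-liner:

    # Name and phase of every kube-system pod, roughly what the retry loop inspects
    kubectl -n kube-system get pods \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'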
	I1223 00:00:24.936659  679852 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-003676" context rescaled to 1 replicas
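A second CoreDNS replica buys no redundancy on a single-node cluster, hence the rescale above; the equivalent manual command:

    # Same effect as the rescale logged above
    kubectl -n kube-system scale deployment coredns --replicas=1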
	I1223 00:00:25.099271  679852 system_pods.go:86] 8 kube-system pods found
	I1223 00:00:25.099316  679852 system_pods.go:89] "coredns-66bc5c9577-blbgm" [76efde56-a055-4559-b815-d14f5f6a67f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:25.099325  679852 system_pods.go:89] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:25.099335  679852 system_pods.go:89] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1223 00:00:25.099344  679852 system_pods.go:89] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1223 00:00:25.099353  679852 system_pods.go:89] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:25.099374  679852 system_pods.go:89] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1223 00:00:25.099383  679852 system_pods.go:89] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:25.099391  679852 system_pods.go:89] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1223 00:00:25.423570  679852 system_pods.go:86] 8 kube-system pods found
	I1223 00:00:25.423623  679852 system_pods.go:89] "coredns-66bc5c9577-blbgm" [76efde56-a055-4559-b815-d14f5f6a67f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:25.423634  679852 system_pods.go:89] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:25.423643  679852 system_pods.go:89] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1223 00:00:25.423653  679852 system_pods.go:89] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1223 00:00:25.423663  679852 system_pods.go:89] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:25.423672  679852 system_pods.go:89] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1223 00:00:25.423680  679852 system_pods.go:89] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:25.423688  679852 system_pods.go:89] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1223 00:00:25.924040  679852 system_pods.go:86] 8 kube-system pods found
	I1223 00:00:25.924077  679852 system_pods.go:89] "coredns-66bc5c9577-blbgm" [76efde56-a055-4559-b815-d14f5f6a67f0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:25.924088  679852 system_pods.go:89] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:25.924096  679852 system_pods.go:89] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1223 00:00:25.924105  679852 system_pods.go:89] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1223 00:00:25.924111  679852 system_pods.go:89] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:25.924119  679852 system_pods.go:89] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1223 00:00:25.924125  679852 system_pods.go:89] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:25.924144  679852 system_pods.go:89] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1223 00:00:26.447195  679852 system_pods.go:86] 7 kube-system pods found
	I1223 00:00:26.447230  679852 system_pods.go:89] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:26.447242  679852 system_pods.go:89] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1223 00:00:26.447249  679852 system_pods.go:89] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1223 00:00:26.447255  679852 system_pods.go:89] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:26.447263  679852 system_pods.go:89] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1223 00:00:26.447267  679852 system_pods.go:89] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:26.447272  679852 system_pods.go:89] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1223 00:00:26.962293  679852 system_pods.go:86] 7 kube-system pods found
	I1223 00:00:26.962330  679852 system_pods.go:89] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:26.962336  679852 system_pods.go:89] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1223 00:00:26.962343  679852 system_pods.go:89] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1223 00:00:26.962347  679852 system_pods.go:89] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:26.962351  679852 system_pods.go:89] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1223 00:00:26.962354  679852 system_pods.go:89] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:26.962359  679852 system_pods.go:89] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1223 00:00:27.922382  679852 system_pods.go:86] 7 kube-system pods found
	I1223 00:00:27.922427  679852 system_pods.go:89] "coredns-66bc5c9577-v4sr7" [77bc6a0c-6bea-4aa3-bd73-f390233b4766] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1223 00:00:27.922436  679852 system_pods.go:89] "etcd-kubenet-003676" [0b560744-32b9-41fd-b1b3-766cfcd30f11] Running
	I1223 00:00:27.922449  679852 system_pods.go:89] "kube-apiserver-kubenet-003676" [308a57dc-4999-47b4-96a5-b98c29ecef2d] Running
	I1223 00:00:27.922454  679852 system_pods.go:89] "kube-controller-manager-kubenet-003676" [8d2f0882-47dd-44f9-9301-b4e86587562d] Running
	I1223 00:00:27.922459  679852 system_pods.go:89] "kube-proxy-4ftjm" [0a332404-e690-4c7f-bd5f-a89cc26c4aca] Running
	I1223 00:00:27.922468  679852 system_pods.go:89] "kube-scheduler-kubenet-003676" [99c2fc70-a235-42a2-9f70-7f7ff71cd8ed] Running
	I1223 00:00:27.922473  679852 system_pods.go:89] "storage-provisioner" [a33bcf34-4580-4416-8cee-9ac58aa89add] Running
	I1223 00:00:27.922484  679852 system_pods.go:126] duration metric: took 3.090178465s to wait for k8s-apps to be running ...
	I1223 00:00:27.922494  679852 system_svc.go:44] waiting for kubelet service to be running ....
	I1223 00:00:27.922552  679852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1223 00:00:27.948015  679852 system_svc.go:56] duration metric: took 25.506716ms WaitForService to wait for kubelet
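The kubelet service check above is a plain systemd query, runnable directly on the node:

    # Exit code carries the answer; --quiet suppresses the state string
    sudo systemctl is-active --quiet kubelet && echo "kubelet running"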
	I1223 00:00:27.948053  679852 kubeadm.go:587] duration metric: took 4.229720543s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1223 00:00:27.948076  679852 node_conditions.go:102] verifying NodePressure condition ...
	I1223 00:00:27.951212  679852 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1223 00:00:27.951237  679852 node_conditions.go:123] node cpu capacity is 8
	I1223 00:00:27.951253  679852 node_conditions.go:105] duration metric: took 3.171063ms to run NodePressure ...
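The ephemeral-storage and CPU figures above come from the node's reported capacity; the same fields can be inspected with:

    # Source of the capacity figures logged above
    kubectl get node kubenet-003676 -o jsonpath='{.status.capacity}'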
	I1223 00:00:27.951266  679852 start.go:242] waiting for startup goroutines ...
	I1223 00:00:27.951276  679852 start.go:247] waiting for cluster config update ...
	I1223 00:00:27.951293  679852 start.go:256] writing updated cluster config ...
	I1223 00:00:27.951555  679852 ssh_runner.go:195] Run: rm -f paused
	I1223 00:00:27.955807  679852 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1223 00:00:27.959346  679852 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v4sr7" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> Docker <==
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.307858466Z" level=info msg="Restoring containers: start."
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.323372868Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.337128443Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.852390536Z" level=info msg="Loading containers: done."
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.861539478Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.861589165Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.861664123Z" level=info msg="Initializing buildkit"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.879242339Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885778595Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885844389Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885923371Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:50:52 newest-cni-348344 dockerd[1188]: time="2025-12-22T23:50:52.885884482Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:50:52 newest-cni-348344 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:50:53 newest-cni-348344 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:50:53 newest-cni-348344 cri-dockerd[1478]: time="2025-12-22T23:50:53Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:50:53 newest-cni-348344 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
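The cgroup v1 deprecation warning in the daemon log above is worth noting, since the kubelet failures later in this log hinge on the same hierarchy; a quick host-side check:

    # "cgroup2fs" means the unified v2 hierarchy; "tmpfs" means legacy v1
    stat -fc %T /sys/fs/cgroup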
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:00:32.026960   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:00:32.027616   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:00:32.029303   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:00:32.029826   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:00:32.031567   10897 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 51 50 e3 f8 92 08 06
	[Dec22 23:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 23 15 c0 f6 66 08 06
	[  +0.000401] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[Dec22 23:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +0.088099] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 12 57 26 ed f1 08 06
	[  +5.341024] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[ +14.537406] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 72 df 3b 35 8d 08 06
	[  +0.000388] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +2.465032] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 84 3f 6a 28 22 08 06
	[  +0.000373] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[Dec23 00:00] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	
	
	==> kernel <==
	 00:00:32 up  3:42,  0 user,  load average: 2.75, 1.91, 1.70
	Linux newest-cni-348344 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 23 00:00:29 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:00:29 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 443.
	Dec 23 00:00:29 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:29 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:29 newest-cni-348344 kubelet[10732]: E1223 00:00:29.793828   10732 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:00:29 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:00:29 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:00:30 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 444.
	Dec 23 00:00:30 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:30 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:30 newest-cni-348344 kubelet[10743]: E1223 00:00:30.543325   10743 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:00:30 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:00:30 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 445.
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:31 newest-cni-348344 kubelet[10764]: E1223 00:00:31.300231   10764 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 446.
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:31 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:00:32 newest-cni-348344 kubelet[10903]: E1223 00:00:32.051902   10903 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:00:32 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:00:32 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
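The restart loop above shows the v1.35.0-rc.1 kubelet refusing to validate its configuration on a cgroup v1 host. One possible remediation, assuming a GRUB-booted systemd host (a sketch; the CI machine in this run was not reconfigured):

    # Boot the kernel with the unified cgroup v2 hierarchy the kubelet demands
    sudo sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
    sudo update-grub && sudo reboot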
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344: exit status 6 (329.515891ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1223 00:00:32.422970  687062 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-348344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (93.07s)
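The status failure above traces to the profile missing from the kubeconfig ("newest-cni-348344" does not appear in it), which is exactly what the printed hint addresses:

    # Repoint the kubeconfig entry at the current cluster endpoint, per the hint above
    out/minikube-linux-amd64 update-context -p newest-cni-348344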

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (372.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-348344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
E1223 00:00:51.097261   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p newest-cni-348344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 105 (6m9.653118292s)

                                                
                                                
-- stdout --
	* [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	* Pulling base image v0.0.48-1766394456-22288 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1223 00:00:34.066824  687772 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:34.067051  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067058  687772 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:34.067063  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067257  687772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:34.067701  687772 out.go:368] Setting JSON to false
	I1223 00:00:34.068753  687772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13374,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:34.068805  687772 start.go:143] virtualization: kvm guest
	I1223 00:00:34.070524  687772 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:34.072192  687772 notify.go:221] Checking for updates...
	I1223 00:00:34.072201  687772 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:34.073912  687772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:34.074996  687772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:34.076047  687772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:34.077175  687772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:34.078295  687772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:34.079882  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:34.080446  687772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:34.106101  687772 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:34.106213  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.161275  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.151129133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.161373  687772 docker.go:319] overlay module found
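The docker system info call above dumps the whole structure through a Go template; the same interface can pull single fields, which is handy for the cgroup details this run turns on:

    # CgroupDriver matches the info blob above; CgroupVersion reports "1" or "2"
    docker system info --format '{{.CgroupDriver}} {{.CgroupVersion}}'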
	I1223 00:00:34.163775  687772 out.go:179] * Using the docker driver based on existing profile
	I1223 00:00:34.164711  687772 start.go:309] selected driver: docker
	I1223 00:00:34.164723  687772 start.go:928] validating driver "docker" against &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.164829  687772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1223 00:00:34.165640  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.231050  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.221418272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.231362  687772 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1223 00:00:34.231388  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:34.231454  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:34.231489  687772 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.233188  687772 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1223 00:00:34.234320  687772 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:34.235503  687772 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:34.236442  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:34.236471  687772 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1223 00:00:34.236486  687772 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:34.236540  687772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:34.236575  687772 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1223 00:00:34.236586  687772 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1223 00:00:34.236749  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.256540  687772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:34.256557  687772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1223 00:00:34.256572  687772 cache.go:243] Successfully downloaded all kic artifacts
	I1223 00:00:34.256623  687772 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1223 00:00:34.256695  687772 start.go:364] duration metric: took 39.918µs to acquireMachinesLock for "newest-cni-348344"
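
The acquireMachinesLock step above serializes machine operations per profile before fixHost runs. A minimal sketch of that pattern, assuming a flock(2)-style advisory lock file; the path and the helper name acquireLock are illustrative, not minikube's implementation, and only the 500ms delay and 10m timeout come from the log line:

    // A sketch, not minikube's code: take an exclusive advisory lock with a
    // bounded wait, mirroring Delay:500ms / Timeout:10m0s from the log line.
    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    func acquireLock(path string, timeout, delay time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            // Non-blocking attempt; failure means another process holds the lock.
            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
                return f, nil // released when f is closed or the process exits
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        f, err := acquireLock("/tmp/machines-newest-cni-348344.lock", 10*time.Minute, 500*time.Millisecond)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        fmt.Println("lock held; safe to create/start the machine")
    }

An advisory lock has the useful property that the kernel drops it if the holder crashes, which is why the 39.918µs uncontended acquisition above is safe to retry from other runners.
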
	I1223 00:00:34.256714  687772 start.go:96] Skipping create...Using existing machine configuration
	I1223 00:00:34.256719  687772 fix.go:54] fixHost starting: 
	I1223 00:00:34.256918  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.273955  687772 fix.go:112] recreateIfNeeded on newest-cni-348344: state=Stopped err=<nil>
	W1223 00:00:34.273978  687772 fix.go:138] unexpected machine state, will restart: <nil>
	I1223 00:00:34.276035  687772 out.go:252] * Restarting existing docker container for "newest-cni-348344" ...
	I1223 00:00:34.276100  687772 cli_runner.go:164] Run: docker start newest-cni-348344
	I1223 00:00:34.520995  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.540249  687772 kic.go:430] container "newest-cni-348344" state is running.
	I1223 00:00:34.540736  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:34.560319  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.560718  687772 machine.go:94] provisionDockerMachine start ...
	I1223 00:00:34.560825  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:34.581907  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:34.582194  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:34.582211  687772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1223 00:00:34.583095  687772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41172->127.0.0.1:33168: read: connection reset by peer
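
The "connection reset by peer" above is expected right after docker start: the container is running before its sshd is listening, and the dial is simply retried until it succeeds three seconds later. A sketch of that retry loop, assuming golang.org/x/crypto/ssh; the host, port, user, and key path are taken from this log, while the attempt count and backoff are assumptions:

    // A sketch: retry the SSH dial until sshd in the restarted container
    // accepts connections. Retry policy here is a guess, not libmachine's.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int, wait time.Duration) (*ssh.Client, error) {
        var err error
        for i := 0; i < attempts; i++ {
            var c *ssh.Client
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(wait) // e.g. "read: connection reset by peer" while sshd starts
        }
        return nil, fmt.Errorf("ssh %s after %d attempts: %w", addr, attempts, err)
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
            Timeout:         10 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:33168", cfg, 10, time.Second)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh is up")
    }
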
	I1223 00:00:37.726578  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.726621  687772 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1223 00:00:37.726764  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.746947  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.747183  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.747203  687772 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1223 00:00:37.900818  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.900900  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.919317  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.919561  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.919579  687772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1223 00:00:38.062239  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.062284  687772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1223 00:00:38.062331  687772 ubuntu.go:190] setting up certificates
	I1223 00:00:38.062344  687772 provision.go:84] configureAuth start
	I1223 00:00:38.062400  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:38.081263  687772 provision.go:143] copyHostCerts
	I1223 00:00:38.081355  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1223 00:00:38.081386  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1223 00:00:38.081497  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1223 00:00:38.081760  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1223 00:00:38.081785  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1223 00:00:38.081851  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1223 00:00:38.082007  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1223 00:00:38.082027  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1223 00:00:38.082101  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1223 00:00:38.082238  687772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
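
The server cert generated above is signed by the shared CA and carries exactly the SANs listed in the san=[...] field, so the Docker TLS endpoint is valid for the loopback, the container IP, and the hostnames. A self-contained sketch of issuing such a certificate with crypto/x509; it mints a throwaway CA in memory, whereas the run above reuses ca.pem/ca-key.pem, and the key type is an assumption (only the SANs, org, and the 26280h CertExpiration come from this log):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA; the real run loads the existing minikube CA instead.
        caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Server certificate with the SANs shown in the log line above.
        srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-348344"}},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-348344"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued server cert: %d DER bytes, SANs %v\n", len(der), srv.DNSNames)
    }
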
	I1223 00:00:38.170695  687772 provision.go:177] copyRemoteCerts
	I1223 00:00:38.170759  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1223 00:00:38.170815  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.189123  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.291920  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1223 00:00:38.309166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1223 00:00:38.326166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1223 00:00:38.344564  687772 provision.go:87] duration metric: took 282.199681ms to configureAuth
	I1223 00:00:38.344627  687772 ubuntu.go:206] setting minikube options for container-runtime
	I1223 00:00:38.344911  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:38.344995  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.366260  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.366529  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.366545  687772 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1223 00:00:38.510728  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1223 00:00:38.510754  687772 ubuntu.go:71] root file system type: overlay
	I1223 00:00:38.510908  687772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1223 00:00:38.510979  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.532018  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.532329  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.532458  687772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1223 00:00:38.686369  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1223 00:00:38.686443  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.704974  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.705187  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.705204  687772 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1223 00:00:38.853249  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
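
The diff -u ... || { mv ...; systemctl ...; } command above only swaps docker.service.new into place and restarts dockerd when the rendered unit actually differs from what is on disk, which keeps a disruptive restart out of the unchanged path (here diff succeeded, so nothing was restarted). The same idea as a local Go sketch; the helper name is ours, and it assumes root and a local systemctl rather than SSH:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnitIfChanged writes want to path and restarts docker only when
    // the content differs, mirroring the diff-or-swap shell command above.
    func updateUnitIfChanged(path string, want []byte) (bool, error) {
        if have, err := os.ReadFile(path); err == nil && bytes.Equal(have, want) {
            return false, nil // identical: skip daemon-reload and restart
        }
        if err := os.WriteFile(path+".new", want, 0o644); err != nil {
            return false, err
        }
        if err := os.Rename(path+".new", path); err != nil { // atomic on the same filesystem
            return false, err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return true, fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return true, nil
    }

    func main() {
        changed, err := updateUnitIfChanged("/lib/systemd/system/docker.service", []byte("...rendered unit...\n"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("docker restarted:", changed)
    }
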
	I1223 00:00:38.853285  687772 machine.go:97] duration metric: took 4.292539002s to provisionDockerMachine
	I1223 00:00:38.853303  687772 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1223 00:00:38.853321  687772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1223 00:00:38.853419  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1223 00:00:38.853487  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.872421  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.979494  687772 ssh_runner.go:195] Run: cat /etc/os-release
	I1223 00:00:38.983309  687772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1223 00:00:38.983340  687772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1223 00:00:38.983352  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1223 00:00:38.983401  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1223 00:00:38.983478  687772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1223 00:00:38.983566  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1223 00:00:38.991328  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:39.009038  687772 start.go:296] duration metric: took 155.718049ms for postStartSetup
	I1223 00:00:39.009116  687772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1223 00:00:39.009156  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.028095  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.126878  687772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1223 00:00:39.131527  687772 fix.go:56] duration metric: took 4.874800463s for fixHost
	I1223 00:00:39.131555  687772 start.go:83] releasing machines lock for "newest-cni-348344", held for 4.874847678s
	I1223 00:00:39.131653  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:39.149572  687772 ssh_runner.go:195] Run: cat /version.json
	I1223 00:00:39.149644  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.149684  687772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1223 00:00:39.149791  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.169897  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.170262  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.329086  687772 ssh_runner.go:195] Run: systemctl --version
	I1223 00:00:39.336405  687772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1223 00:00:39.341291  687772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1223 00:00:39.341348  687772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1223 00:00:39.349279  687772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1223 00:00:39.349306  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.349351  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.349509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.363512  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1223 00:00:39.372815  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1223 00:00:39.381448  687772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.381508  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1223 00:00:39.390109  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.399162  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1223 00:00:39.408060  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.416584  687772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1223 00:00:39.424711  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1223 00:00:39.433432  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1223 00:00:39.442056  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1223 00:00:39.450905  687772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1223 00:00:39.458512  687772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1223 00:00:39.466991  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:39.550442  687772 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1223 00:00:39.632902  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.632954  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.633002  687772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1223 00:00:39.646974  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.659240  687772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1223 00:00:39.675979  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.688406  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1223 00:00:39.700912  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.715235  687772 ssh_runner.go:195] Run: which cri-dockerd
	I1223 00:00:39.718945  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1223 00:00:39.726972  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1223 00:00:39.739559  687772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1223 00:00:39.825824  687772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1223 00:00:39.909620  687772 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.909743  687772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
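
The 130-byte daemon.json pushed above configures dockerd's cgroup driver to match the "cgroupfs" detected on the host. The file's exact contents are not echoed in this log; a plausible minimal version, built around dockerd's documented exec-opts key (everything here is an assumption except the driver name):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // exec-opts is the documented dockerd knob for the cgroup driver;
        // any other keys minikube writes are not visible in this log.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b)) // candidate contents for /etc/docker/daemon.json
    }
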
	I1223 00:00:39.923453  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1223 00:00:39.935401  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:40.031388  687772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1223 00:00:40.779801  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1223 00:00:40.794700  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1223 00:00:40.807727  687772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1223 00:00:40.821730  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:40.835038  687772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1223 00:00:40.917554  687772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1223 00:00:41.006720  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.090943  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1223 00:00:41.119047  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1223 00:00:41.131277  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.215437  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1223 00:00:41.283622  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:41.298485  687772 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1223 00:00:41.298551  687772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1223 00:00:41.302554  687772 start.go:564] Will wait 60s for crictl version
	I1223 00:00:41.302645  687772 ssh_runner.go:195] Run: which crictl
	I1223 00:00:41.306307  687772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1223 00:00:41.332425  687772 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1223 00:00:41.332495  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.358777  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.385668  687772 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1223 00:00:41.385749  687772 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1223 00:00:41.403696  687772 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1223 00:00:41.407961  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
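
The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp pipeline above makes the host.minikube.internal mapping idempotent: any stale line for the name is dropped before the fresh one is appended, and the file is replaced in one step. The same rewrite as a Go sketch (the function name is ours):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any stale line for name, appends ip<TAB>name,
    // and swaps the file into place, like the shell pipeline above.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // swap in place, like the sudo cp above
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
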
	I1223 00:00:41.419714  687772 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1223 00:00:41.420647  687772 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1223 00:00:41.420848  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:41.420937  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.444049  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.444075  687772 docker.go:624] Images already preloaded, skipping extraction
	I1223 00:00:41.444128  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.466181  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.466206  687772 cache_images.go:86] Images are preloaded, skipping loading
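
The two docker images --format {{.Repository}}:{{.Tag}} runs above implement the preload check: when every required image is already present in the daemon, the preloaded tarball is not extracted. A sketch of that comparison; the required list is abbreviated from the stdout block above:

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
            "registry.k8s.io/etcd:3.6.6-0",
            "registry.k8s.io/coredns/coredns:v1.13.1",
            "registry.k8s.io/pause:3.10.1",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        // Index what the daemon already has, then look for gaps.
        have := map[string]bool{}
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            have[sc.Text()] = true
        }
        for _, img := range required {
            if !have[img] {
                fmt.Println("missing image, would extract the preload tarball:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }

Note the ordering of the two stdout lists above differs; docker images gives no ordering guarantee, which is why a set comparison rather than a line diff is the right check.
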
	I1223 00:00:41.466215  687772 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1223 00:00:41.466312  687772 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1223 00:00:41.466372  687772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
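
docker info --format {{.CgroupDriver}} above reads the runtime's live cgroup driver back so the kubelet config rendered below (cgroupDriver: cgroupfs) can be kept in agreement with dockerd; a mismatch here is a classic kubelet-won't-start failure. Reading it from Go is a one-liner around os/exec:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            panic(err)
        }
        driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
        fmt.Println("kubelet cgroupDriver must match:", driver)
    }
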
	I1223 00:00:41.520065  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:41.520097  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:41.520116  687772 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1223 00:00:41.520151  687772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1223 00:00:41.520280  687772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
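
The kubeadm config block above is rendered from the cluster parameters before being copied to /var/tmp/minikube/kubeadm.yaml.new. A sketch of rendering the InitConfiguration fragment with text/template; the template text and struct fields are ours, while the values come from this log:

    package main

    import (
        "os"
        "text/template"
    )

    // Built from string concatenation so indentation in the emitted YAML is exact.
    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.BindPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: {{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        data := struct {
            AdvertiseAddress, CRISocket, NodeName string
            BindPort                              int
        }{"192.168.94.2", "unix:///var/run/cri-dockerd.sock", "newest-cni-348344", 8443}
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
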
	
	I1223 00:00:41.520348  687772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1223 00:00:41.529909  687772 binaries.go:51] Found k8s binaries, skipping transfer
	I1223 00:00:41.529987  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1223 00:00:41.538670  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1223 00:00:41.552566  687772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1223 00:00:41.565766  687772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1223 00:00:41.578604  687772 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1223 00:00:41.582388  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.592239  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.689716  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:41.716265  687772 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1223 00:00:41.716293  687772 certs.go:195] generating shared ca certs ...
	I1223 00:00:41.716315  687772 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:41.716492  687772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1223 00:00:41.716548  687772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1223 00:00:41.716557  687772 certs.go:257] generating profile certs ...
	I1223 00:00:41.716731  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1223 00:00:41.716814  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1223 00:00:41.716864  687772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1223 00:00:41.716992  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1223 00:00:41.717032  687772 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1223 00:00:41.717041  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1223 00:00:41.717076  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1223 00:00:41.717110  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1223 00:00:41.717142  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1223 00:00:41.717206  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:41.718210  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1223 00:00:41.739858  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1223 00:00:41.759304  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1223 00:00:41.777433  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1223 00:00:41.794878  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1223 00:00:41.811695  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1223 00:00:41.829786  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1223 00:00:41.846666  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1223 00:00:41.863546  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1223 00:00:41.880762  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1223 00:00:41.898275  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1223 00:00:41.915259  687772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1223 00:00:41.927541  687772 ssh_runner.go:195] Run: openssl version
	I1223 00:00:41.933577  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.940758  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1223 00:00:41.948346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952096  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952140  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.987400  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1223 00:00:41.995158  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.002657  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1223 00:00:42.010346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014050  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014097  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.048067  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1223 00:00:42.055784  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.063393  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1223 00:00:42.071081  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075446  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075515  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.110438  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1223 00:00:42.118365  687772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1223 00:00:42.122225  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1223 00:00:42.157294  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1223 00:00:42.192810  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1223 00:00:42.226497  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1223 00:00:42.261017  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1223 00:00:42.299910  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
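
Each openssl x509 ... -checkend 86400 run above asks whether a control-plane cert expires within the next 24 hours, which decides whether certs must be regenerated before the restart. The equivalent check in Go's crypto/x509 (the function name is ours):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the certificate at path expires within d,
    // the same test as `openssl x509 -checkend 86400` in the log above.
    func checkend(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        expiring, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }
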
	I1223 00:00:42.335335  687772 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:42.335492  687772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1223 00:00:42.355576  687772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1223 00:00:42.364792  687772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1223 00:00:42.364813  687772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1223 00:00:42.364868  687772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1223 00:00:42.373141  687772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1223 00:00:42.374088  687772 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.374674  687772 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-348344" cluster setting kubeconfig missing "newest-cni-348344" context setting]
	I1223 00:00:42.375665  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.377667  687772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1223 00:00:42.385784  687772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1223 00:00:42.385817  687772 kubeadm.go:602] duration metric: took 20.99658ms to restartPrimaryControlPlane
	I1223 00:00:42.385829  687772 kubeadm.go:403] duration metric: took 50.508354ms to StartCluster
	I1223 00:00:42.385848  687772 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.385918  687772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.387577  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.387909  687772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:42.388038  687772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:42.388131  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:42.388136  687772 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-348344"
	I1223 00:00:42.388154  687772 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-348344"
	I1223 00:00:42.388170  687772 addons.go:70] Setting dashboard=true in profile "newest-cni-348344"
	I1223 00:00:42.388189  687772 addons.go:70] Setting default-storageclass=true in profile "newest-cni-348344"
	I1223 00:00:42.388208  687772 addons.go:239] Setting addon dashboard=true in "newest-cni-348344"
	W1223 00:00:42.388222  687772 addons.go:248] addon dashboard should already be in state true
	I1223 00:00:42.388226  687772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-348344"
	I1223 00:00:42.388262  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388184  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388611  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388764  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388771  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.390679  687772 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:42.391709  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:42.411960  687772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1223 00:00:42.411964  687772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:42.412088  687772 addons.go:239] Setting addon default-storageclass=true in "newest-cni-348344"
	I1223 00:00:42.412122  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.412456  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.413023  687772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.413043  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:42.413094  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.416720  687772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1223 00:00:42.418014  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1223 00:00:42.418038  687772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1223 00:00:42.418101  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.435878  687772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.435898  687772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:42.435951  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.437317  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.440138  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.468807  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
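The three docker container inspect calls above resolve the host port that Docker published for the node's SSH port (22/tcp), which is why all three sshutil clients dial 127.0.0.1:33168. A minimal Go sketch of that lookup, assuming the docker CLI is on PATH (illustrative only, not minikube's actual sshutil code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort returns the host port Docker mapped to the container's 22/tcp,
// using the same Go template as the cli_runner calls in the log above.
// Illustrative sketch only; not minikube's actual implementation.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("newest-cni-348344")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}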
	I1223 00:00:42.547260  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:42.600477  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1223 00:00:42.600501  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1223 00:00:42.601363  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.607651  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.614184  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1223 00:00:42.614204  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1223 00:00:42.628125  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1223 00:00:42.628154  687772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1223 00:00:42.643093  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1223 00:00:42.643121  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1223 00:00:42.702024  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1223 00:00:42.702052  687772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1223 00:00:42.716211  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1223 00:00:42.716238  687772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1223 00:00:42.729547  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1223 00:00:42.729569  687772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1223 00:00:42.742218  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1223 00:00:42.742246  687772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1223 00:00:42.754716  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:42.754741  687772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1223 00:00:42.767403  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.251127  687772 api_server.go:52] waiting for apiserver process to appear ...
	W1223 00:00:43.251190  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.251231  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251243  687772 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
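The retry.go line above shows minikube's recovery path: a failed kubectl apply is not fatal during addon enablement, it is re-run after a short, growing delay while the apiserver comes up. A minimal sketch of that pattern, assuming a simple doubling backoff (minikube's real helper, retry.go in the log, differs in detail):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryApply re-runs fn with a doubling delay, the behavior suggested by the
// "will retry after 100ms" line above. Sketch only.
func retryApply(fn func() error, initial time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // back off between kubectl apply attempts
	}
	return err
}

func main() {
	err := retryApply(func() error {
		return errors.New("connect: connection refused") // stand-in for a failed apply
	}, 100*time.Millisecond, 5)
	fmt.Println(err)
}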
	I1223 00:00:43.251206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:43.251536  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.392258  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.445582  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.478824  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.509277  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:43.537617  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.565023  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.661224  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.715310  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.751478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.004029  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.062136  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.088452  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.143262  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.251335  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.383985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:44.396693  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.439768  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:44.455552  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.902917  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.962468  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.251805  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.326701  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:45.400433  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.424647  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:45.480956  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.751310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.799387  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:45.854148  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.251658  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.752208  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.836006  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:46.890186  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.995326  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:47.057423  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.251702  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:47.426474  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:47.485952  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.752131  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:48.207567  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:48.251315  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:48.262862  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.519836  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:48.578806  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.752112  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.251876  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.751844  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.890160  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:49.947020  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.066047  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:50.120129  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.251336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:50.558241  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:50.615271  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.751572  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.252351  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.751913  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.251786  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.752327  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.791985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:52.850108  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:53.251446  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:53.751646  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.252244  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.751444  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.347206  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:55.401615  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:55.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.936027  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:55.995088  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:56.251432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:56.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.111686  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:57.169648  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:57.251735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.751676  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.251279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.751377  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.251660  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.751533  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.252079  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.347519  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:00.405959  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.406017  687772 retry.go:84] will retry after 5.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
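
The retry.go entries interleaved with the probes show each failed apply being rescheduled with a growing, jittered delay (5.4s here, later 18.8s, 15.6s, 26.5s). The log does not reveal retry.go's internals, so the following is only an illustrative sketch of that pattern in Go, with hypothetical attempt counts and base delay:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping base*2^i plus up to
    // 100% random jitter after each failure, and returns the last error.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		backoff := base << uint(i) // exponential growth per attempt
    		backoff += time.Duration(rand.Int63n(int64(backoff))) // jitter
    		fmt.Printf("will retry after %s\n", backoff.Round(100*time.Millisecond))
    		time.Sleep(backoff)
    	}
    	return err
    }

    func main() {
    	err := retry(3, 2*time.Second, func() error {
    		return errors.New("connection refused") // stand-in for the failing apply
    	})
    	fmt.Println("gave up:", err)
    }
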
	I1223 00:01:00.564176  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:00.620480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.751684  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.251318  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.252159  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.751813  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.251679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.752247  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.252308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.751451  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.252117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.751808  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.759328  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:05.824086  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:05.824133  687772 retry.go:84] will retry after 18.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.105622  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:06.162303  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.251478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:06.751336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.751827  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.251532  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.751779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.251540  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.503324  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:09.562697  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.562742  687772 retry.go:84] will retry after 15.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.751986  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.251441  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.751356  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.251389  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.752279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.251802  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.751324  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.251818  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.751866  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.252139  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.751583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.252310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.751625  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.251842  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.252084  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.751783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.251376  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.751964  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.751822  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.251704  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
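
Throughout this stretch the tooling is also polling every 500ms for a kube-apiserver process via pgrep; the applies keep failing because that wait never succeeds. A self-contained sketch of such a wait loop (hypothetical, mirroring only the observable cadence and pgrep pattern, not minikube's actual ssh_runner plumbing):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls for a kube-apiserver process every 500ms,
    // matching the cadence of the pgrep probes in the log above.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a process matches the pattern.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not start within %s", timeout)
    }

    func main() {
    	fmt.Println(waitForAPIServer(30 * time.Second))
    }
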
	I1223 00:01:20.568697  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:20.639449  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.639500  687772 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.751734  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.252249  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.751801  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.251412  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.751366  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.252197  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.751273  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.252133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.590346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:24.662633  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.662674  687772 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.751773  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:25.211650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:01:25.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:25.284581  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:25.752154  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.251728  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.751953  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.251838  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.751740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.251778  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.752182  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.252321  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.752290  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.251523  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.251957  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.751675  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.251501  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.610650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:32.668272  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.668321  687772 retry.go:84] will retry after 25.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.751294  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.251566  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.751514  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.751347  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.251743  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.751789  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.251397  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.751367  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.251392  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.751873  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.751798  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.251316  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.752288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.251339  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.751372  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.752308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.752021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:42.776761  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.776791  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:42.776839  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:42.797079  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.797110  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:42.797168  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:42.817812  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.817839  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:42.817907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:42.837835  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.837868  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:42.837924  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:42.858165  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.858188  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:42.858231  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:42.878211  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.878236  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:42.878281  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:42.897739  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.897762  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:42.897817  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:42.918314  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.918340  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:42.918352  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:42.918362  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:42.965734  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:42.965766  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:42.986555  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:42.986585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:43.052069  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:43.052092  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:43.052108  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:43.072511  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:43.072542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
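
Once the start deadline passes, the run switches from waiting to diagnostics: it asks Docker for any container, running or exited, named after each expected control-plane component (k8s_kube-apiserver, k8s_etcd, and so on), then collects kubelet, dmesg, describe-nodes, Docker, and container-status logs. Every probe returns zero containers, confirming the control plane never came up. The per-component probe can be approximated as follows (hypothetical helper, not the actual logs.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // whose name matches the k8s_<component> prefix, mirroring the
    // docker ps filters in the log above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		ids, err := containerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    		}
    	}
    }

The name filter works because dockershim/cri-dockerd names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a bare component name is enough to match.
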
	I1223 00:01:45.620354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:45.631856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:45.651448  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.651473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:45.651528  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:45.671204  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.671229  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:45.671279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:45.690397  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.690418  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:45.690470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:45.711316  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.711342  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:45.711400  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:45.731731  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.731755  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:45.731812  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:45.751415  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.751442  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:45.751500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:45.770135  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.770157  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:45.770202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:45.789088  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.789114  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:45.789127  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:45.789139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:45.819714  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:45.819741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:45.867983  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:45.868013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:45.888018  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:45.888051  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:45.945247  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:45.945270  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:45.945286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:47.390289  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:47.445750  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:47.445788  687772 retry.go:84] will retry after 18.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:48.464795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:48.476515  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:48.499002  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.499028  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:48.499082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:48.523130  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.523165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:48.523225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:48.547882  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.547904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:48.547950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:48.567534  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.567560  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:48.567627  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:48.590197  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.590222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:48.590280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:48.609279  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.609306  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:48.609357  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:48.628700  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.628731  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:48.628795  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:48.648378  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.648402  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:48.648416  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:48.648433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:48.672700  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:48.672741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:48.743864  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
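Editor's note: every "describe nodes" attempt in this section fails identically because nothing is listening on localhost:8443, so kubectl cannot even fetch the API group list. From inside the node the quickest confirmation is to look for the apiserver process and probe the port directly (illustrative commands; the pgrep invocation is the same one minikube itself runs between diagnostic passes below):

    # Is a kube-apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # Is anything accepting connections on the apiserver port?
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "port 8443 refused"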
	I1223 00:01:48.743885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:48.743897  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:48.762911  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:48.762941  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:48.793205  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:48.793235  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
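Editor's note: one full diagnostic pass ends here. Each pass gathers the same five log sources, and the exact commands appear verbatim in the lines above; they are collected in one place below for anyone reproducing the diagnosis by hand (assumption: you have a shell on the node, for example via "minikube ssh"):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -u cri-docker -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a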
	I1223 00:01:51.128346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:51.182480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.182522  687772 retry.go:84] will retry after 29.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
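Editor's note: the "will retry after 29.5s" line comes from minikube's retry helper (retry.go:84); the addon apply is reattempted after an increasing delay for as long as the apiserver stays unreachable. A bash equivalent of that pattern, for illustration only (the apply command is verbatim from the log; the backoff schedule here is made up, since minikube computes its own delays in Go):

    apply() {
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml
    }
    for delay in 5 10 30 60; do   # illustrative backoff schedule
      apply && break
      echo "apply failed, will retry after ${delay}s"
      sleep "$delay"
    done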
	I1223 00:01:51.341781  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:51.353222  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:51.373421  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.373447  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:51.373497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:51.392557  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.392613  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:51.392675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:51.411939  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.411964  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:51.412018  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:51.430968  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.430998  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:51.431054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:51.451234  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.451266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:51.451329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:51.470904  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.470947  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:51.471017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:51.490121  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.490146  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:51.490201  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:51.511843  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.511875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:51.511891  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:51.511906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:51.545106  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:51.545138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:51.594541  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:51.594576  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:51.615455  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:51.615485  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:51.680061  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:51.680080  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:51.680091  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.199432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:54.211004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:54.230469  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.230498  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:54.230548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:54.251212  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.251245  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:54.251300  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:54.274147  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.274177  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:54.274238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:54.297381  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.297413  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:54.297471  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:54.316290  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.316315  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:54.316362  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:54.335315  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.335339  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:54.335393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:54.354058  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.354089  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:54.354144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:54.372661  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.372686  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:54.372700  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:54.372715  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:54.417565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:54.417601  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:54.438013  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:54.438041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:54.497552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:54.497575  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:54.497589  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.517495  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:54.517523  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:57.056037  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:57.067478  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:57.087084  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.087114  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:57.087183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:57.105193  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.105218  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:57.105270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:57.122859  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.122885  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:57.122931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:57.141996  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.142021  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:57.142074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:57.160005  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.160032  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:57.160083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:57.178889  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.178915  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:57.178989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:57.196419  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.196446  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:57.196498  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:57.214764  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.214790  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:57.214804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:57.214817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:57.266333  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:57.266370  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:57.289301  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:57.289330  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:57.347060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:57.347099  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:57.347117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:57.370222  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:57.370257  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:58.466074  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:58.519779  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:58.519828  687772 retry.go:84] will retry after 42.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:59.898063  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:59.909410  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:59.927950  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.927974  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:59.928017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:59.946773  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.946800  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:59.946861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:59.964419  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.964443  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:59.964500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:59.982454  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.982478  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:59.982537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:00.000838  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.000860  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:00.000929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:00.018673  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.018696  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:00.018747  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:00.035944  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.035973  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:00.036027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:00.053640  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.053666  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:00.053679  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:00.053700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:00.098844  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:00.098870  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:00.120198  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:00.120224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:00.175459  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:00.175477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:00.175490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:00.194106  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:00.194146  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:02.722361  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:02.733721  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:02.753939  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.753963  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:02.754025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:02.773570  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.773610  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:02.773665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:02.793427  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.793451  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:02.793514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:02.812154  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.812183  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:02.812241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:02.830757  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.830777  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:02.830819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:02.849140  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.849163  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:02.849206  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:02.867505  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.867529  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:02.867584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:02.885892  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.885920  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:02.885935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:02.885950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:02.933880  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:02.933906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:02.955273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:02.955304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:03.009924  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:03.009951  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:03.010012  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:03.028798  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:03.028821  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:05.557718  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:05.569769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:05.588873  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.588899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:05.588946  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:05.607265  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.607289  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:05.607342  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:05.625761  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.625790  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:05.625860  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:05.643443  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.643464  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:05.643513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:05.661241  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.661266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:05.661314  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:05.679744  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.679764  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:05.679805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:05.697808  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.697831  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:05.697878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:05.716222  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.716245  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:05.716255  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:05.716269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:05.772404  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:05.772437  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:05.793610  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:05.793643  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:05.849453  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:05.849479  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:05.849493  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:05.868250  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:05.868273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:05.997019  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:02:06.048845  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:06.048961  687772 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
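Editor's note: unlike the retried addon applies above, the default-storageclass enable gives up here and surfaces the error to the user ("Enabling 'default-storageclass' returned an error"). Once the apiserver becomes reachable again, the addon can be reapplied by hand, either by rerunning the exact apply minikube attempted or by driving it from the host (illustrative; <profile> stands for whatever profile is under test and is not filled in from the log):

    # re-run the exact apply minikube attempted:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
      -f /etc/kubernetes/addons/storageclass.yaml
    # or let minikube drive it from the host:
    minikube addons enable default-storageclass -p <profile>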
	I1223 00:02:08.398283  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:08.409794  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:08.428854  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.428878  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:08.428927  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:08.447223  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.447248  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:08.447292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:08.464797  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.464816  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:08.464857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:08.483364  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.483386  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:08.483449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:08.501985  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.502014  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:08.502064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:08.520995  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.521020  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:08.521071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:08.542089  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.542109  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:08.542149  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:08.560448  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.560468  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:08.560478  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:08.560489  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:08.606044  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:08.606073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:08.626314  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:08.626339  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:08.681455  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:08.681477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:08.681490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:08.699804  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:08.699830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.230050  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:11.241320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:11.260234  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.260256  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:11.260301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:11.279535  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.279558  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:11.279635  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:11.297775  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.297799  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:11.297843  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:11.315756  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.315780  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:11.315823  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:11.333828  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.333850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:11.333894  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:11.351608  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.351631  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:11.351673  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:11.371003  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.371024  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:11.371065  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:11.388642  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.388662  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
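Each poll cycle first checks for a running apiserver process (the pgrep line), then scans for one container per control-plane component using the k8s_<component> name prefix visible in the docker ps filters; every scan here comes back "0 containers". A sketch of the same sweep, assuming a local docker CLI (an illustration, not minikube's implementation):

    // list_k8s.go: replicate the per-component container scan from the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Components probed in the log above; the k8s_ prefix matches the
        // filter minikube passes to docker ps.
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "scan failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }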
	I1223 00:02:11.388671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:11.388681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:11.406664  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:11.406687  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.434828  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:11.434851  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:11.481940  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:11.481966  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:11.503051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:11.503077  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:11.563912  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:14.064094  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:14.075388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:14.094051  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.094075  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:14.094123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:14.112428  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.112454  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:14.112511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:14.130910  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.130935  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:14.130991  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:14.149172  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.149194  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:14.149247  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:14.167387  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.167414  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:14.167470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:14.187009  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.187034  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:14.187080  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:14.205514  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.205537  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:14.205604  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:14.223867  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.223893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:14.223906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:14.223919  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:14.278850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:14.278877  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:14.278904  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:14.297791  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:14.297817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:14.329010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:14.329035  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:14.375196  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:14.375228  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
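With no component containers to inspect, each cycle falls back to host-level collectors: journalctl for the docker/cri-docker and kubelet units, dmesg for kernel warnings, and a container listing that prefers crictl but falls back to docker ps when crictl is absent. The same collectors run over a local shell for illustration; minikube invokes them through its SSH runner, so treating them as local commands here is an assumption:

    // gather_logs.go: run the host-side log collectors listed in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        collectors := map[string]string{
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, cmd := range collectors {
            fmt.Println("=== Gathering logs for", name, "...")
            // Same /bin/bash -c invocation shape as the ssh_runner lines above.
            out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Print(string(out))
        }
    }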
	I1223 00:02:16.895760  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:16.908501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:16.928330  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.928357  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:16.928403  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:16.947248  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.947272  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:16.947319  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:16.967240  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.967266  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:16.967318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:16.986942  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.986966  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:16.987025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:17.008674  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.008702  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:17.008760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:17.030466  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.030492  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:17.030548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:17.051687  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.051719  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:17.051773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:17.073457  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.073486  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:17.073502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:17.073521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:17.131973  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:17.132010  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:17.157397  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:17.157433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:17.217639  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:17.217669  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:17.217683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:17.239498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:17.239530  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:19.769550  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:19.782360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:19.802423  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.802446  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:19.802497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:19.821183  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.821214  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:19.821269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:19.840343  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.840369  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:19.840426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:19.857810  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.857835  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:19.857878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:19.875458  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.875481  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:19.875523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:19.893840  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.893864  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:19.893916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:19.912030  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.912053  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:19.912094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:19.930049  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.930066  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:19.930077  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:19.930088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:19.976279  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:19.976304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:19.995814  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:19.995837  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:20.054797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:20.054819  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:20.054833  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:20.074562  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:20.074588  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:20.651032  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:02:20.702678  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:20.702795  687772 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
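The storage-provisioner addon is applied with kubectl apply --force and retried on failure. The hint about --validate=false is a red herring here: validation fails only because fetching the OpenAPI schema needs the very apiserver that is down, so disabling validation would just move the failure to the apply request itself. A hedged sketch of the apply-and-retry behaviour (illustrative; the retry count and delay are assumptions, and the log runs the real command via sudo with an explicit kubeconfig and binary path):

    // apply_with_retry.go: sketch of the addon apply/retry behaviour
    // the log shows, not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
        for attempt := 1; attempt <= 5; attempt++ {
            cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
                time.Sleep(10 * time.Second)
                continue
            }
            fmt.Println("manifest applied")
            return
        }
        fmt.Println("giving up after 5 attempts")
    }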
	I1223 00:02:22.602868  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:22.614420  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:22.633871  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.633892  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:22.633942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:22.652376  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.652403  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:22.652454  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:22.670318  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.670340  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:22.670384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:22.688893  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.688913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:22.688966  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:22.707579  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.707614  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:22.707667  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:22.726147  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.726174  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:22.726230  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:22.744895  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.744919  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:22.744975  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:22.765807  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.765834  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:22.765848  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:22.765858  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:22.786075  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:22.786111  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:22.814010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:22.814034  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:22.859717  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:22.859741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:22.878865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:22.878889  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:22.933790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:25.434500  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:25.446396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:25.466157  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.466184  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:25.466237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:25.484799  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.484827  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:25.484899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:25.503442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.503470  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:25.503516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:25.522088  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.522114  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:25.522174  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:25.540899  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.540924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:25.540979  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:25.559853  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.559877  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:25.559929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:25.578537  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.578560  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:25.578619  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:25.597442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.597465  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:25.597476  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:25.597491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:25.617688  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:25.617718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:25.672737  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:25.665784    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.666239    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.667786    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.668269    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.669751    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:25.672761  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:25.672777  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:25.691559  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:25.691585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:25.719893  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:25.719918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.271777  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:28.284248  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:28.304042  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.304069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:28.304126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:28.322682  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.322711  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:28.322769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:28.340899  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.340925  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:28.340974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:28.359896  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.359922  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:28.359976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:28.378627  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.378650  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:28.378700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:28.396793  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.396821  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:28.396870  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:28.415408  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.415434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:28.415480  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:28.434108  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.434131  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:28.434142  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:28.434153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:28.462377  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:28.462405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.509046  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:28.509080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:28.531034  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:28.531065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:28.587866  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:28.580703    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.581207    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.582777    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.583174    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.584683    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:28.587904  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:28.587920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.109730  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:31.121215  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:31.140775  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.140799  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:31.140853  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:31.160694  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.160719  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:31.160766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:31.180064  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.180087  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:31.180133  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:31.198777  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.198802  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:31.198856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:31.217848  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.217875  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:31.217923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:31.237167  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.237196  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:31.237251  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:31.257964  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.257995  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:31.258056  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:31.279556  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.279581  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:31.279607  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:31.279624  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:31.336644  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:31.336664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:31.336675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.355102  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:31.355129  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:31.384063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:31.384096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:31.429299  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:31.429337  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:33.951226  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:33.962558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:33.981280  687772 logs.go:282] 0 containers: []
	W1223 00:02:33.981301  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:33.981353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:34.000326  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.000351  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:34.000417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:34.020043  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.020069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:34.020114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:34.042279  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.042304  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:34.042363  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:34.060550  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.060571  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:34.060631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:34.078917  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.078939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:34.078986  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:34.098151  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.098177  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:34.098224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:34.117100  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.117124  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:34.117137  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:34.117153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:34.138330  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:34.138358  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:34.193562  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:34.193588  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:34.193615  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:34.212264  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:34.212288  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:34.240368  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:34.240399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:36.793206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:36.804783  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:36.823535  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.823556  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:36.823618  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:36.841856  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.841879  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:36.841933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:36.860292  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.860319  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:36.860360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:36.878691  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.878719  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:36.878773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:36.897448  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.897472  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:36.897519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:36.916562  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.916585  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:36.916654  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:36.934784  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.934807  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:36.934865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:36.953285  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.953305  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:36.953317  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:36.953328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:37.000978  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:37.001008  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:37.021185  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:37.021217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:37.081314  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:37.074072    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.074651    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076251    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076693    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.078295    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:37.081345  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:37.081366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:37.100453  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:37.100480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
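Note: every kubectl attempt above dies with "dial tcp [::1]:8443: connect: connection refused", meaning localhost resolved to the IPv6 loopback and nothing is bound to the apiserver port at all, which matches the empty docker ps results for k8s_kube-apiserver. A quick hedged check of that hypothesis from inside the node:

    # Confirm there is no listener on 8443 and the healthz endpoint is unreachable;
    # ss/curl being present inside the minikube image is an assumption.
    sudo ss -ltnp | grep -w 8443 || echo "no listener on 8443"
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"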
	I1223 00:02:39.629693  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:39.641060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:39.660163  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.660187  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:39.660232  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:39.680357  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.680379  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:39.680422  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:39.699821  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.699853  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:39.699916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:39.719383  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.719407  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:39.719460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:39.739699  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.739726  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:39.739800  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:39.758766  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.758791  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:39.758849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:39.777656  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.777690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:39.777752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:39.796962  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.796984  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:39.796995  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:39.797006  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:39.842320  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:39.842347  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:39.862054  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:39.862080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:39.916930  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:39.910012    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.910586    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912148    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912544    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.913971    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:39.916953  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:39.916970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:39.935277  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:39.935306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:40.946301  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:02:41.000005  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:41.000109  687772 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1223 00:02:41.001884  687772 out.go:179] * Enabled addons: 
	I1223 00:02:41.002846  687772 addons.go:530] duration metric: took 1m58.614813363s for enable addons: enabled=[]
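Note: the dashboard apply fails during client-side validation, before anything reaches the cluster, because kubectl validates manifests against the server's OpenAPI schema ("failed to download openapi" above) and the apiserver is down. A hedged sketch of what skipping validation would look like; it only moves the failure, since the apply itself still needs a live apiserver:

    # Single-manifest variant of the addon apply above with validation disabled.
    # --validate=false is a real kubectl flag; using it here is illustrative only.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml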
	I1223 00:02:42.463498  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:42.474861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:42.493733  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.493756  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:42.493806  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:42.513344  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.513376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:42.513436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:42.537617  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.537647  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:42.537701  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:42.557673  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.557698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:42.557746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:42.576567  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.576604  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:42.576669  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:42.595813  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.595836  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:42.595890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:42.615074  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.615101  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:42.615154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:42.634655  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.634685  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:42.634702  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:42.634719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:42.654826  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:42.654852  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:42.710552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:42.703396    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.703921    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705447    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705896    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.707365    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:42.710573  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:42.710585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:42.729412  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:42.729439  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:42.758163  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:42.758187  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
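Note: with kubeadm-style bootstrapping, kube-apiserver runs as a static pod that the kubelet is responsible for starting, so the kubelet journal gathered above is the most likely place to find the root cause. A hedged way to narrow those 400 lines down:

    # Filter the kubelet journal for static-pod and apiserver activity; the grep
    # pattern is a guess at useful keywords, not taken from the report.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'kube-apiserver|static pod|failed|error'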
	I1223 00:02:45.306682  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:45.318226  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:45.337265  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.337287  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:45.337343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:45.355924  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.355945  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:45.355990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:45.374282  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.374303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:45.374348  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:45.394500  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.394533  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:45.394584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:45.412466  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.412489  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:45.412538  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:45.431148  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.431185  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:45.431234  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:45.450281  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.450303  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:45.450352  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:45.468758  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.468787  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:45.468804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:45.468818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.520708  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:45.520742  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:45.542983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:45.543013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:45.598778  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:45.591672    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.592201    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.593887    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.594424    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.595931    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:45.598798  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:45.598812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:45.617903  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:45.617931  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.156370  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:48.167842  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:48.187202  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.187224  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:48.187268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:48.206448  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.206471  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:48.206516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:48.225302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.225322  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:48.225373  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:48.244155  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.244185  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:48.244245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:48.264312  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.264350  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:48.264418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:48.284233  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.284260  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:48.284317  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:48.303899  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.303924  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:48.303973  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:48.324302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.324335  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:48.324350  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:48.324366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:48.345435  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:48.345463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:48.402949  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:48.402972  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:48.402984  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:48.423927  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:48.423954  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.452771  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:48.452799  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.001239  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:51.013175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:51.032822  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.032846  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:51.032898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:51.051652  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.051682  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:51.051724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:51.070373  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.070395  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:51.070448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:51.088655  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.088676  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:51.088732  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:51.108004  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.108025  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:51.108078  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:51.126636  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.126662  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:51.126728  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:51.145355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.145385  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:51.145451  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:51.164355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.164384  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:51.164396  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:51.164409  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:51.191698  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:51.191724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.238383  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:51.238411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:51.260545  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:51.260580  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:51.318147  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:51.310651    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.311192    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.312772    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.313264    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.314877    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:51.318168  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:51.318182  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
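Note: all the component probes above filter on names beginning with "k8s_", the prefix cri-dockerd (inherited from dockershim) gives to Kubernetes-managed containers. Instead of eight separate filters, one hedged listing shows every such container at once:

    # List all Kubernetes-named containers in one shot; the table format string
    # is an assumption for readability, the k8s_ filter comes from the log above.
    sudo docker ps -a --filter name=k8s_ --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'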
	I1223 00:02:53.838848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:53.850007  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:53.868584  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.868622  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:53.868663  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:53.887617  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.887640  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:53.887687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:53.906384  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.906409  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:53.906453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:53.924912  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.924938  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:53.924988  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:53.943400  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.943425  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:53.943477  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:53.961941  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.961969  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:53.962024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:53.980915  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.980941  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:53.980987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:53.998798  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.998817  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:53.998827  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:53.998839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:54.017064  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:54.017089  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:54.045091  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:54.045114  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:54.090278  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:54.090307  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:54.111890  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:54.111920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:54.166797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:54.159777    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.160366    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.161942    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.162343    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.163852    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.668571  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:56.680147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:56.699018  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.699042  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:56.699093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:56.716996  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.717019  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:56.717068  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:56.735529  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.735565  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:56.735644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:56.756677  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.756701  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:56.756757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:56.777819  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.777850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:56.777905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:56.799967  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.799997  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:56.800054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:56.818811  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.818836  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:56.818881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:56.837426  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.837461  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:56.837473  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:56.837487  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:56.893850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
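The repeated `connection refused` against [::1]:8443 is the downstream symptom of the empty `k8s_kube-apiserver` listing above: nothing is bound to the apiserver port, so every kubectl call fails before authentication even starts. A quick hedged check from inside the node (`/healthz` is the standard apiserver liveness path):

    # Expected to print the fallback while no kube-apiserver is running;
    # once the apiserver is up, the endpoint returns "ok".
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"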
	I1223 00:02:56.893879  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:56.893894  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:56.912125  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:56.912151  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
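The backtick substitution in that container-status command is a portability fallback: if `crictl` is installed, `which crictl` resolves to its path and the CRI listing runs; if not, the substitution degrades to the literal word `crictl`, that invocation fails, and the `|| sudo docker ps -a` branch takes over. The same logic in a more explicit form:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a    # CRI-level container listing when crictl exists
    else
      sudo docker ps -a    # plain Docker listing otherwise
    fi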
	I1223 00:02:56.939250  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:56.939279  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:56.986566  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:56.986599  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
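Note the timestamp cadence: each diagnostic round is followed by a pause of roughly 2.5 seconds before the next `pgrep` probe (00:02:56 → 00:02:59 → 00:03:02 → …), i.e. the start path is polling for a kube-apiserver process and re-gathering logs on every miss. A hypothetical reconstruction of that wait loop; the 300-second budget is an illustrative assumption, not a value taken from the log:

    deadline=$((SECONDS + 300))
    # pgrep flags as used in the log: -x exact match, -n newest, -f match the full command line
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver never appeared" >&2
        break
      fi
      sleep 3
    done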
	I1223 00:02:59.506330  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:59.518294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:59.540502  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.540529  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:59.540586  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:59.559288  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.559322  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:59.559372  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:59.577919  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.577945  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:59.578002  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:59.596632  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.596655  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:59.596705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:59.614750  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.614775  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:59.614826  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:59.632989  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.633007  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:59.633057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:59.650953  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.650972  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:59.651020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:59.669171  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.669190  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:59.669202  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:59.669214  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:59.713997  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:59.714026  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
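The three journal sources collected each round (kubelet, Docker plus cri-docker, and the kernel ring buffer) can be pulled by hand with the same bounds; `--no-pager` is added here purely so the output streams cleanly over SSH and is not part of the logged commands:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400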
	I1223 00:02:59.733682  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:59.733709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:59.801000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
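kubectl dials localhost:8443 because that is what the node-local kubeconfig names as the server. Verifying the endpoint is a one-liner (this assumes the file is plain YAML readable by root, which is the usual layout):

    sudo grep -n 'server:' /var/lib/minikube/kubeconfig
    # expected output of the form: server: https://localhost:8443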
	I1223 00:02:59.801018  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:59.801029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:59.819988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:59.820018  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.350019  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:02.361484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:02.380765  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.380793  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:02.380841  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:02.398822  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.398847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:02.398892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:02.416468  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.416488  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:02.416530  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:02.435155  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.435182  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:02.435237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:02.453935  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.453961  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:02.454012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:02.472347  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.472376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:02.472445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:02.490480  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.490505  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:02.490562  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:02.510458  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.510485  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:02.510498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:02.510509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.541744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:02.541769  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:02.587578  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:02.587619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:02.607135  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:02.607161  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:02.663082  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:02.663104  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:02.663117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
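The `--format={{.ID}}` argument used throughout is a Go template over docker's per-container fields. When triaging a loop like this interactively, widening the template to the standard `.Names` and `.Status` keys gives the state of every k8s_ container at a glance in one call:

    docker ps -a --filter=name=k8s_ --format='{{.ID}}  {{.Names}}  {{.Status}}'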
	I1223 00:03:05.182740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:05.194033  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:05.212783  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.212809  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:05.212868  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:05.230615  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.230643  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:05.230687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:05.249068  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.249091  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:05.249140  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:05.268884  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.268913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:05.268965  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:05.288077  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.288103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:05.288159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:05.306886  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.306916  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:05.306970  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:05.325552  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.325579  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:05.325644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:05.344222  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.344252  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:05.344264  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:05.344276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:05.389222  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:05.389252  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:05.409357  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:05.409384  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:05.466244  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:05.466269  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:05.466285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.484803  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:05.484830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.013719  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:08.026534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:08.046545  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.046567  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:08.046633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:08.065353  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.065375  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:08.065423  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:08.084081  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.084109  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:08.084156  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:08.102488  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.102514  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:08.102570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:08.121317  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.121347  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:08.121391  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:08.139209  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.139232  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:08.139282  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:08.157445  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.157465  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:08.157510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:08.177073  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.177101  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:08.177115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:08.177131  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:08.195188  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:08.195222  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.223256  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:08.223282  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:08.270668  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:08.270696  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:08.290331  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:08.290355  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:08.344801  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
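Every round from here on ends the same way, so the kubectl retries carry no new signal; the cause has to be upstream, in why the kubelet never starts the apiserver. A focused filter over the same kubelet journal the loop already collects (the grep pattern is illustrative, not from the log):

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'apiserver|static pod|manifest'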
	I1223 00:03:10.846497  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:10.857798  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:10.876797  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.876818  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:10.876863  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:10.895838  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.895862  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:10.895907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:10.913971  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.913996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:10.914038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:10.932422  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.932449  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:10.932501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:10.951013  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.951034  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:10.951076  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:10.969170  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.969198  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:10.969242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:10.988274  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.988332  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:10.988382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:11.006849  687772 logs.go:282] 0 containers: []
	W1223 00:03:11.006875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:11.006889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:11.006906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:11.059569  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:11.059619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:11.079808  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:11.079835  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:11.134768  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:11.134794  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:11.134817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:11.153181  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:11.153207  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.681510  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:13.692957  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:13.711987  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.712017  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:13.712069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:13.730999  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.731026  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:13.731083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:13.753677  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.753709  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:13.753769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:13.779299  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.779328  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:13.779389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:13.800195  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.800223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:13.800269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:13.818836  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.818861  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:13.818905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:13.837265  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.837293  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:13.837349  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:13.855911  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.855934  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:13.855944  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:13.855963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:13.877413  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:13.877442  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:13.932902  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:13.932922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:13.932935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:13.951430  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:13.951455  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.979434  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:13.979463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.528395  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:16.539658  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:16.558721  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.558746  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:16.558802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:16.577097  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.577122  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:16.577169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:16.594944  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.594973  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:16.595021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:16.612956  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.612982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:16.613028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:16.631601  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.631626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:16.631689  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:16.650054  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.650077  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:16.650125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:16.668847  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.668868  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:16.668912  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:16.686862  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.686892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:16.686906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:16.686923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:16.743145  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:16.743166  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:16.743178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:16.762565  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:16.762607  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:16.794528  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:16.794556  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.840343  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:16.840372  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
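With the Docker runtime, kubeadm runs the control plane as kubelet static pods, so if those manifests never land, no k8s_ containers will ever be created no matter how long the loop polls. Checking the kubeadm default manifest directory (the path is an assumption about this node image, though it is the standard one):

    ls -l /etc/kubernetes/manifests/
    # expect kube-apiserver.yaml, kube-controller-manager.yaml,
    # kube-scheduler.yaml and etcd.yaml on a healthy control plane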
	I1223 00:03:19.362509  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:19.374211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:19.393192  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.393216  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:19.393268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:19.412437  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.412465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:19.412523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:19.432373  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.432401  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:19.432460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:19.452125  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.452159  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:19.452217  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:19.471301  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.471328  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:19.471374  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:19.490544  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.490571  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:19.490643  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:19.510487  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.510508  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:19.510559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:19.529060  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.529084  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:19.529097  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:19.529112  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:19.574443  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:19.574473  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.594488  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:19.594517  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:19.649890  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:19.649910  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:19.649923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:19.668626  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:19.668651  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:22.198480  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:22.210881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:22.230438  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.230462  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:22.230522  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:22.248861  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.248882  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:22.248922  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:22.268466  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.268499  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:22.268557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:22.289199  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.289223  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:22.289268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:22.307380  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.307405  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:22.307470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:22.324678  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.324704  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:22.324763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:22.343704  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.343736  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:22.343791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:22.362087  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.362117  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:22.362137  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:22.362150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:22.409818  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:22.409877  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:22.430134  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:22.430165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:22.485643  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:22.485664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:22.485680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:22.504121  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:22.504150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
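By this point the loop has polled for roughly half a minute with an identical result each round. Before reading further repetitions, two quick node-level checks summarize whether the kubelet itself is alive and how long the machine has been up (both commands are standard, neither appears in the log):

    uptime
    sudo systemctl status kubelet --no-pager -l | head -n 20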
	I1223 00:03:25.031881  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:25.043513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:25.063145  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.063167  687772 logs.go:284] No container was found matching "kube-apiserver"
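	Each of these docker ps calls filters on the k8s_<name> prefix that cri-dockerd gives Kubernetes pod containers and prints only the container IDs. An empty result plus this warning means no such container was ever created, not merely that one exited. To inspect a single component manually, a sketch:

		docker ps -a --filter "name=k8s_etcd" --format "{{.ID}} {{.Status}}"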
	I1223 00:03:25.063211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:25.082000  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.082025  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:25.082074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:25.099962  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.099984  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:25.100038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:25.118454  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.118479  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:25.118537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:25.136993  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.137020  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:25.137069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:25.155902  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.155925  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:25.155974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:25.175659  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.175683  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:25.175737  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:25.194139  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.194167  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:25.194180  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:25.194193  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:25.240226  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:25.240258  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
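	The dmesg flags restrict the kernel log to warning severity and worse, with human-readable timestamps, no pager, and no color. Assuming util-linux dmesg, the long-option equivalent is:

		sudo dmesg --nopager --human --color=never --level=warn,err,crit,alert,emerg | tail -n 400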
	I1223 00:03:25.261339  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:25.261367  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:25.320736  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:25.320756  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:25.320768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:25.341035  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:25.341064  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:27.870845  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:27.882071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:27.901298  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.901323  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:27.901382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:27.919859  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.919880  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:27.919930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:27.938496  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.938520  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:27.938563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:27.956888  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.956916  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:27.956972  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:27.975342  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.975362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:27.975412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:27.994015  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.994038  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:27.994082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:28.013037  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.013065  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:28.013125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:28.033210  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.033234  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:28.033247  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:28.033262  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:28.078861  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:28.078892  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:28.098865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:28.098890  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:28.154165  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:28.154185  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:28.154197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:28.172425  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:28.172454  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.702937  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:30.714537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:30.735323  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.735346  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:30.735411  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:30.754342  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.754364  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:30.754416  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:30.773486  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.773513  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:30.773570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:30.792473  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.792498  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:30.792554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:30.810955  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.810981  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:30.811028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:30.829795  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.829816  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:30.829864  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:30.848939  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.848959  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:30.849000  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:30.867397  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.867423  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:30.867435  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:30.867452  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:30.887088  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:30.887116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:30.942084  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:30.942116  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:30.942130  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:30.960703  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:30.960730  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.988334  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:30.988359  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.539710  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:33.551147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:33.569876  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.569899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:33.569943  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:33.588678  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.588710  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:33.588766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:33.607229  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.607251  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:33.607302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:33.625442  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.625466  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:33.625527  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:33.644308  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.644340  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:33.644396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:33.662684  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.662717  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:33.662786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:33.681135  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.681161  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:33.681209  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:33.700016  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.700042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:33.700057  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:33.700070  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:33.718957  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:33.718985  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:33.747390  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:33.747417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.793693  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:33.793722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:33.815051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:33.815076  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:33.869709  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:36.371365  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:36.383229  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:36.403744  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.403771  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:36.403818  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:36.422087  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.422109  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:36.422163  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:36.440967  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.440989  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:36.441046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:36.459110  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.459137  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:36.459184  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:36.477754  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.477781  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:36.477838  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:36.496775  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.496803  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:36.496857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:36.516542  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.516577  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:36.516652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:36.537692  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.537720  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:36.537731  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:36.537744  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:36.585346  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:36.585376  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:36.605519  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:36.605545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:36.660230  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:36.660253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:36.660269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:36.678368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:36.678395  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:39.206672  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:39.218123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:39.236299  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.236322  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:39.236384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:39.256168  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.256194  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:39.256256  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:39.278907  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.278934  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:39.278987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:39.299685  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.299712  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:39.299771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:39.319824  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.319847  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:39.319890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:39.339314  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.339340  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:39.339388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:39.357097  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.357122  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:39.357178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:39.375484  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.375506  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:39.375518  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:39.375528  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:39.422143  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:39.422171  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:39.442163  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:39.442190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:39.499251  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:39.499300  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:39.499313  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:39.520555  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:39.520585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:42.050334  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:42.062329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:42.081392  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.081414  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:42.081466  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:42.100032  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.100060  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:42.100108  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:42.118667  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.118701  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:42.118755  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:42.137260  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.137280  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:42.137324  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:42.156202  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.156223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:42.156268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:42.173781  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.173805  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:42.173849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:42.191802  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.191823  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:42.191865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:42.210403  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.210428  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:42.210439  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:42.210451  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:42.257288  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:42.257324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:42.279921  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:42.279950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:42.335965  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:42.335989  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:42.336007  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:42.354691  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:42.354717  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:44.883238  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:44.894443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:44.913117  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.913141  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:44.913198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:44.931401  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.931426  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:44.931481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:44.950195  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.950223  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:44.950276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:44.968485  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.968511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:44.968566  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:44.987148  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.987171  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:44.987233  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:45.005624  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.005646  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:45.005693  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:45.023699  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.023724  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:45.023791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:45.042874  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.042892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:45.042903  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:45.042913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:45.091063  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:45.091090  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:45.111078  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:45.111104  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:45.165637  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:45.165664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:45.165680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:45.183805  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:45.183831  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.712691  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:47.724393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:47.743118  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.743145  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:47.743192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:47.764020  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.764047  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:47.764100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:47.784950  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.784979  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:47.785031  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:47.805130  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.805153  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:47.805202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:47.824818  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.824840  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:47.824881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:47.842122  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.842142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:47.842182  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:47.860107  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.860126  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:47.860169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:47.877957  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.877981  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:47.877991  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:47.878003  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.913554  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:47.913583  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:47.959272  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:47.959301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:47.979197  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:47.979224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:48.034846  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:48.034864  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:48.034876  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:50.554653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:50.565766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:50.584506  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.584527  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:50.584568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:50.603087  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.603112  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:50.603159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:50.621694  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.621718  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:50.621758  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:50.640855  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.640882  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:50.640950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:50.658573  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.658615  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:50.658659  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:50.676703  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.676725  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:50.676792  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:50.694997  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.695020  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:50.695084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:50.711361  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.711382  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:50.711393  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:50.711405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:50.739475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:50.739500  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:50.789788  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:50.789828  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:50.810067  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:50.810096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:50.864855  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:50.857771   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.858239   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.859923   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.860349   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.861907   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:50.864881  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:50.864896  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.383457  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:53.394757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:53.414248  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.414277  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:53.414341  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:53.432950  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.432970  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:53.433020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:53.452058  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.452081  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:53.452143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:53.470670  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.470698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:53.470751  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:53.489416  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.489443  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:53.489486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:53.508963  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.508995  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:53.509057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:53.530683  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.530710  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:53.530770  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:53.551545  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.551577  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:53.551610  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:53.551627  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.570296  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:53.570324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:53.598123  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:53.598154  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:53.646248  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:53.646280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:53.666819  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:53.666844  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:53.722068  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:53.715109   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.715646   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717149   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717536   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.719006   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
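With no containers to inspect, the tool falls back to host-level log sources: the kubelet and docker units via journalctl, the filtered kernel ring buffer via dmesg, and a crictl-or-docker container listing. A sketch of the same bundle run locally, assuming a systemd host with passwordless sudo; the command strings are copied from the ssh_runner lines above, while the fixed run order here is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each command is run through bash -c, as the ssh_runner lines do.
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", c.name, err, out)
	}
}
```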
	I1223 00:03:56.223706  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:56.235187  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:56.255491  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.255511  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:56.255551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:56.274455  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.274479  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:56.274519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:56.293621  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.293648  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:56.293702  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:56.312485  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.312511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:56.312558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:56.331239  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.331266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:56.331320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:56.349793  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.349813  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:56.349856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:56.368378  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.368397  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:56.368446  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:56.386706  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.386730  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:56.386744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:56.386759  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:56.435036  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:56.435067  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:56.456766  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:56.456793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:56.515022  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:56.506534   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.507203   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.508885   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.509323   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.510920   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:56.515044  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:56.515056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:56.537382  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:56.537424  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
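Each cycle opens with a process-level gate, "sudo pgrep -xnf kube-apiserver.*minikube.*", before any per-container queries. A sketch of that gate, assuming pgrep is installed; note that pgrep exits 1 simply when nothing matches, which is not an execution error:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -x exact match, -n newest, -f match the full command line.
	out, err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Output()
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			fmt.Println("no kube-apiserver process; fall back to container checks")
			return
		}
		fmt.Println("pgrep failed:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
```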
	I1223 00:03:59.067413  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:59.078926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:59.098458  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.098490  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:59.098543  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:59.119074  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.119100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:59.119146  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:59.138014  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.138036  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:59.138082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:59.157367  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.157390  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:59.157433  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:59.175923  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.175950  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:59.176008  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:59.194211  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.194243  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:59.194295  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:59.212980  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.213004  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:59.213050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:59.231233  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.231255  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:59.231266  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:59.231277  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.260354  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:59.260377  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:59.307751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:59.307784  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:59.327756  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:59.327782  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:59.382873  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:59.375811   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.376331   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.377895   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.378317   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.379902   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
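The "describe nodes" probe runs a version-pinned kubectl binary against the cluster-internal kubeconfig and treats any non-zero exit as "apiserver still down", which is exactly the "Process exited with status 1" reported above. A sketch of that check, assuming a kubectl on PATH; the kubeconfig path is the one from the log, everything else is illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// The log uses the pinned binary under /var/lib/minikube/binaries;
	// a plain "kubectl" stands in for it here.
	cmd := exec.Command("kubectl", "describe", "nodes",
		"--kubeconfig", "/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// exec.ExitError carries the status the log reports.
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("describe nodes failed (status %d):\n%s",
				exitErr.ExitCode(), stderr.String())
			return
		}
		fmt.Println("could not run kubectl:", err)
		return
	}
	fmt.Print(stdout.String())
}
```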
	I1223 00:03:59.382895  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:59.382908  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:01.903304  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:01.914514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:01.933300  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.933328  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:01.933388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:01.952153  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.952181  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:01.952225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:01.970903  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.970933  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:01.970987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:01.989493  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.989513  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:01.989567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:02.009114  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.009141  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:02.009198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:02.030277  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.030310  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:02.030365  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:02.050466  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.050492  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:02.050551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:02.069917  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.069941  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:02.069956  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:02.069970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:02.115721  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:02.115750  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:02.135348  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:02.135373  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:02.190691  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:02.183688   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.184205   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.185799   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.186209   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.187682   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:02.190712  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:02.190724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:02.209097  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:02.209122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:04.737357  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:04.748553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:04.770341  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.770369  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:04.770424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:04.791137  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.791165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:04.791214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:04.810520  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.810541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:04.810607  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:04.828972  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.829000  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:04.829055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:04.849074  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.849096  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:04.849148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:04.868041  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.868063  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:04.868115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:04.886481  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.886504  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:04.886567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:04.905235  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.905262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:04.905274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:04.905285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:04.953851  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:04.953880  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:04.973781  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:04.973806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:05.031345  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:05.031368  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:05.031383  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:05.050812  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:05.050839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:07.580204  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:07.592091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:07.611238  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.611267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:07.611318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:07.630713  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.630736  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:07.630786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:07.649511  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.649541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:07.649620  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:07.668236  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.668264  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:07.668323  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:07.687077  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.687101  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:07.687158  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:07.705952  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.705982  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:07.706036  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:07.725156  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.725178  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:07.725224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:07.744024  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.744049  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:07.744063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:07.744079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:07.797680  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:07.797721  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:07.819453  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:07.819481  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:07.875026  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:07.875046  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:07.875059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:07.893942  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:07.893968  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:10.422234  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:10.433749  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:10.453027  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.453049  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:10.453099  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:10.471766  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.471789  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:10.471840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:10.489960  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.489981  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:10.490025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:10.508537  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.508558  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:10.508614  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:10.527336  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.527362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:10.527418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:10.545995  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.546019  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:10.546074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:10.564167  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.564196  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:10.564254  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:10.582919  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.582947  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:10.582961  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:10.582974  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:10.630969  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:10.631004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:10.651161  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:10.651197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:10.709000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:10.709026  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:10.709041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:10.728175  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:10.728203  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:13.258812  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:13.271437  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:13.293437  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.293468  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:13.293525  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:13.313483  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.313508  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:13.313568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:13.333612  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.333643  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:13.333709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:13.353086  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.353111  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:13.353169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:13.372208  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.372230  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:13.372275  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:13.391431  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.391457  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:13.391507  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:13.410402  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.410434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:13.410502  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:13.428653  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.428675  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:13.428687  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:13.428709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:13.474690  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:13.474729  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:13.495426  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:13.495457  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:13.550790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:13.550810  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:13.550822  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:13.569370  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:13.569397  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.099133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:16.110484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:16.129712  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.129743  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:16.129808  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:16.147785  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.147808  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:16.147854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:16.167259  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.167284  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:16.167333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:16.186151  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.186178  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:16.186223  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:16.206074  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.206099  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:16.206154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:16.225296  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.225319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:16.225369  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:16.244091  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.244115  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:16.244160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:16.263620  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.263643  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:16.263655  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:16.263667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:16.323241  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:16.323265  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:16.323281  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:16.342320  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:16.342346  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.371156  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:16.371183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:16.421158  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:16.421188  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
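
The block above is one pass of minikube's wait loop: it probes for a kube-apiserver process, enumerates each expected control-plane container by kubelet's k8s_<component> naming prefix, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, Docker, and container-status logs before retrying. A minimal sketch of the same container enumeration, runnable on the node (the component list and filter format are taken verbatim from the log lines above):

    #!/usr/bin/env bash
    # Check for the control-plane containers the log collector looks for.
    # Kubelet names Docker containers k8s_<container>_<pod>_..., so a
    # name=k8s_<component> filter matches that component's container.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}')
        if [ -z "$ids" ]; then
            echo "no container matching k8s_${name}"
        else
            echo "k8s_${name}: ${ids}"
        fi
    done

With the apiserver container absent, every kubectl call against localhost:8443 below fails with connection refused.
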
	I1223 00:04:18.942795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:18.954257  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:18.974190  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.974217  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:18.974270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:18.993178  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.993200  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:18.993245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:19.013377  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.013405  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:19.013465  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:19.034917  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.034941  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:19.034990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:19.054247  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.054271  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:19.054326  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:19.072206  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.072235  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:19.072297  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:19.091855  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.091882  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:19.091933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:19.111067  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.111100  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:19.111114  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:19.111127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:19.161923  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:19.161955  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:19.182679  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:19.182708  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:19.239475  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:19.239503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:19.239521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:19.259046  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:19.259075  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
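
Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches the pattern against the full command line, -x requires the whole line to match, and -n returns only the newest matching PID, so an empty result means no apiserver process exists at all. The connection-refused errors on [::1]:8443 point the same way: nothing is listening on the port, as opposed to a TLS or authorization failure. A quick manual check along those lines (the /healthz endpoint and curl flags are standard apiserver/CLI conventions, assumed here rather than taken from this log):

    # Is an apiserver process running, and is anything serving port 8443?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -sk https://localhost:8443/healthz || echo "connection to 8443 refused"
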
	I1223 00:04:21.799246  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:21.810742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:21.830826  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.830852  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:21.830896  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:21.849427  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.849455  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:21.849501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:21.867823  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.867847  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:21.867891  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:21.886431  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.886452  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:21.886508  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:21.905079  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.905103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:21.905160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:21.923344  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.923365  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:21.923407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:21.941945  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.941966  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:21.942012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:21.959749  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.959773  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:21.959785  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:21.959795  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:21.979750  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:21.979776  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:22.008278  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:22.008301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:22.059988  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:22.060022  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:22.080174  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:22.080201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:22.135625  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
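
The container-status line relies on a fallback chain. If crictl is on PATH, which substitutes its full path and crictl ps -a runs; if not, the literal word crictl is substituted, the sudo invocation fails with command-not-found, and the || branch falls back to docker ps -a. The same logic spelled out, equivalent to the log line but with $() in place of backquotes:

    # Prefer crictl for container status; fall back to the Docker CLI.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
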
	I1223 00:04:24.636526  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:24.647769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:24.666800  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.666823  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:24.666873  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:24.685078  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.685100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:24.685153  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:24.703219  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.703238  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:24.703287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:24.721619  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.721647  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:24.721705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:24.740548  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.740570  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:24.740632  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:24.758544  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.758568  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:24.758633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:24.776285  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.776317  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:24.776445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:24.794360  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.794386  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:24.794399  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:24.794413  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:24.840111  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:24.840142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:24.860260  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:24.860286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:24.915702  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:24.915723  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:24.915736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:24.934368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:24.934394  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
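
The journal and kernel-log collection is bounded on every pass: journalctl -n 400 tails the last 400 entries per unit, and dmesg -PH -L=never prints human-readable output with no pager and no color, with --level restricting it to warning severity and worse. The same collection can be run by hand (commands taken verbatim from the log lines above):

    # Bounded log collection, matching what the collector gathers each cycle.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
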
	I1223 00:04:27.463653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:27.474997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:27.494098  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.494127  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:27.494183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:27.513771  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.513799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:27.513855  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:27.534688  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.534720  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:27.534777  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:27.553043  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.553065  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:27.553115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:27.571979  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.572005  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:27.572049  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:27.590357  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.590376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:27.590419  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:27.609465  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.609490  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:27.609547  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:27.628214  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.628238  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:27.628253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:27.628267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:27.646519  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:27.646545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.674935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:27.674958  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:27.721277  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:27.721306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:27.741140  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:27.741165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:27.796676  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:30.297779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:30.308987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:30.327806  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.327827  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:30.327885  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:30.347142  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.347165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:30.347216  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:30.365629  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.365656  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:30.365729  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:30.383470  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.383496  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:30.383552  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:30.402127  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.402152  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:30.402214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:30.420681  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.420706  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:30.420757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:30.439453  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.439475  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:30.439517  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:30.458669  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.458691  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:30.458702  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:30.458713  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:30.505022  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:30.505050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:30.528295  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:30.528323  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:30.585055  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:30.585076  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:30.585088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:30.604200  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:30.604229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:33.131779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:33.143670  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:33.163179  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.163200  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:33.163245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:33.182970  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.182992  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:33.183043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:33.201569  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.201609  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:33.201656  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:33.219907  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.219931  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:33.219989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:33.239604  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.239630  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:33.239675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:33.258182  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.258211  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:33.258263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:33.277606  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.277632  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:33.277678  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:33.297258  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.297283  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:33.297296  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:33.297312  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:33.344903  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:33.344932  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:33.364742  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:33.364768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:33.420528  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:33.420549  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:33.420560  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:33.439384  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:33.439411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:35.968903  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:35.980276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:35.999444  687772 logs.go:282] 0 containers: []
	W1223 00:04:35.999474  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:35.999534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:36.018792  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.018819  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:36.018880  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:36.036956  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.036985  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:36.037043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:36.055239  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.055265  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:36.055315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:36.073241  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.073272  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:36.073325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:36.091575  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.091613  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:36.091662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:36.110369  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.110396  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:36.110448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:36.128481  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.128505  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:36.128516  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:36.128526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:36.176492  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:36.176526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:36.196649  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:36.196675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:36.253201  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:36.253224  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:36.253241  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:36.273351  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:36.273379  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.804411  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:38.815899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:38.834644  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.834668  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:38.834713  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:38.853892  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.853919  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:38.853967  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:38.871484  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.871505  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:38.871554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:38.889803  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.889828  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:38.889879  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:38.909558  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.909586  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:38.909652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:38.929528  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.929553  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:38.929624  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:38.948153  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.948181  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:38.948241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:38.966657  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.966679  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:38.966689  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:38.966711  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.994610  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:38.994637  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:39.040694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:39.040722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:39.060391  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:39.060417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:39.116169  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:39.116189  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:39.116201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.638009  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:41.650427  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:41.670214  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.670241  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:41.670289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:41.689539  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.689568  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:41.689651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:41.708449  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.708472  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:41.708520  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:41.727897  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.727918  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:41.727963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:41.748169  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.748200  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:41.748252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:41.767148  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.767172  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:41.767224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:41.789562  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.789589  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:41.789665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:41.808259  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.808281  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:41.808292  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:41.808304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.827093  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:41.827120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:41.854644  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:41.854671  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:41.901960  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:41.901995  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:41.921983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:41.922011  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:41.978723  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
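
Note that describe nodes is run with the version-pinned kubectl that minikube installs on the node, against the node-local kubeconfig, so the failure can be reproduced directly over SSH (paths taken verbatim from the log above):

    # Reproduce the failing probe from inside the node.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig

The collector records that command's stderr twice, once inline in the failure message and once between the ** stderr ** markers, which is why each failed cycle shows the same five memcache.go errors back to back.
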
	I1223 00:04:44.479583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:44.491055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:44.513749  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.513779  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:44.513836  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:44.535619  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.535648  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:44.535722  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:44.555441  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.555464  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:44.555512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:44.574828  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.574851  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:44.574895  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:44.593270  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.593293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:44.593350  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:44.612157  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.612182  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:44.612239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:44.630342  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.630366  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:44.630417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:44.648864  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.648893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:44.648905  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:44.648917  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:44.698462  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:44.698494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:44.718432  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:44.718463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:44.777738  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:44.777764  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:44.777781  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:44.798488  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:44.798522  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
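[editor's note: each cycle probes for every control-plane container by name (docker ps -a --filter=name=k8s_<component>); "0 containers" across the board confirms Docker never created the kube-system pods. The `which crictl || echo crictl` fragment is a fallback: use crictl if installed, otherwise the bare name fails and the command falls through to `docker ps -a`. A hedged Go sketch of the same per-component probe (illustrative only, not minikube's logs.go; assumes a local docker CLI):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same components the log checks, in the same order.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			// List containers in any state whose name matches the kubelet's
			// k8s_<component> naming convention.
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("%s: docker ps failed: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
]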
	[repeated output elided: the same diagnostic cycle as above (pgrep for kube-apiserver; docker ps probes for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard; kubelet/Docker journalctl, dmesg and container-status gathering, in slightly varying order; a failing "kubectl describe nodes") repeats roughly every 2.5 seconds from 00:04:47 through 00:05:10, and every attempt ends with: The connection to the server localhost:8443 was refused - did you specify the right host or port?]
	I1223 00:05:12.853199  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:12.864559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:12.883528  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.883553  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:12.883615  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:12.901914  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.901946  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:12.902003  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:12.920676  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.920703  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:12.920746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:12.938812  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.938840  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:12.938898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:12.956564  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.956588  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:12.956651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:12.975030  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.975056  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:12.975112  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:12.992748  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.992770  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:12.992819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:13.013710  687772 logs.go:282] 0 containers: []
	W1223 00:05:13.013733  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:13.013744  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:13.013756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:13.044889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:13.044920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:13.090565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:13.090611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:13.110578  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:13.110614  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:13.166048  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:13.166066  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:13.166079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.685941  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:15.697434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:15.716560  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.716607  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:15.716664  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:15.735775  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.735799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:15.735847  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:15.753974  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.753996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:15.754046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:15.771763  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.771788  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:15.771846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:15.790222  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.790249  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:15.790294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:15.808671  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.808691  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:15.808735  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:15.827295  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.827324  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:15.827377  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:15.845637  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.845658  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:15.845668  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:15.845679  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:15.892975  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:15.893004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:15.912599  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:15.912626  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:15.967763  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:15.967788  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:15.967801  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.986603  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:15.986632  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:18.516732  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:18.529415  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:18.549048  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.549069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:18.549113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:18.567672  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.567705  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:18.567771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:18.586513  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.586538  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:18.586613  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:18.604518  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.604538  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:18.604579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:18.623446  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.623467  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:18.623510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:18.642213  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.642230  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:18.642279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:18.660501  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.660521  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:18.660563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:18.678846  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.678869  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:18.678882  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:18.678893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:18.727936  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:18.727965  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:18.749033  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:18.749059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:18.804351  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:18.804386  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:18.804401  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:18.822650  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:18.822681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:21.351938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:21.363094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:21.382091  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.382123  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:21.382179  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:21.400790  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.400813  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:21.400861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:21.418989  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.419014  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:21.419060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:21.437814  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.437839  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:21.437898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:21.456967  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.456991  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:21.457045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:21.475541  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.475566  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:21.475644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:21.494493  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.494518  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:21.494576  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:21.513952  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.513979  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:21.513990  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:21.514001  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:21.563253  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:21.563283  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:21.583663  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:21.583693  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:21.638754  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:21.638774  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:21.638786  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:21.657674  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:21.657704  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:24.188905  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:24.200277  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:24.220108  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.220133  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:24.220188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:24.240286  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.240307  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:24.240351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:24.260644  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.260670  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:24.260724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:24.282918  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.282943  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:24.282990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:24.302929  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.302956  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:24.303013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:24.322124  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.322145  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:24.322196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:24.340965  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.340993  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:24.341050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:24.360121  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.360148  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:24.360162  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:24.360177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:24.406776  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:24.406809  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:24.428882  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:24.428909  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:24.484257  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:24.484286  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:24.484304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:24.504724  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:24.504752  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.038561  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:27.050259  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:27.069265  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.069288  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:27.069333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:27.088081  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.088108  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:27.088171  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:27.107172  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.107198  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:27.107246  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:27.125773  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.125804  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:27.125862  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:27.144259  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.144282  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:27.144339  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:27.163197  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.163217  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:27.163263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:27.181942  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.181971  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:27.182030  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:27.199936  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.199964  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:27.199980  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:27.199996  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:27.218431  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:27.218456  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.246756  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:27.246783  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:27.297557  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:27.297603  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:27.318177  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:27.318205  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:27.374968  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
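
Every cycle in this stretch of the log runs the same probe: a pgrep for a kube-apiserver process, then a docker ps name filter for each expected control-plane container, and every filter comes back empty. A condensed sketch of that probe, assuming a shell inside the minikube node (the component names and the k8s_ prefix are taken from the commands above; the loop itself is illustrative, not minikube's exact code path):

	  # Check for each control-plane container the way the log does (sketch).
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo docker ps -a --filter=name="k8s_${c}" --format='{{.ID}}')
	    # An empty result corresponds to the W-level "No container was found
	    # matching ..." lines in the log.
	    [ -z "$ids" ] && echo "no container matching k8s_${c}"
	  done

Since even docker ps -a (which lists exited containers) finds nothing, the control-plane containers were never created at all, rather than created and then crashed.
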
	I1223 00:05:29.875712  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:29.887100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:29.906809  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.906834  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:29.906892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:29.926388  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.926414  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:29.926467  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:29.946220  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.946248  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:29.946302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:29.967102  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.967131  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:29.967188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:29.986540  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.986564  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:29.986631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:30.004809  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.004835  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:30.004881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:30.023625  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.023655  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:30.023711  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:30.042067  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.042089  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:30.042100  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:30.042120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:30.061885  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:30.061913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:30.090401  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:30.090432  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:30.138962  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:30.138993  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:30.159224  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:30.159250  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:30.216295  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:32.716974  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:32.728432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:32.748217  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.748245  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:32.748292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:32.767866  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.767887  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:32.767935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:32.788690  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.788723  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:32.788782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:32.808366  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.808397  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:32.808460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:32.827631  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.827655  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:32.827714  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:32.846429  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.846456  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:32.846511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:32.865177  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.865202  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:32.865258  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:32.885235  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.885258  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:32.885268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:32.885280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:32.905218  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:32.905245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:32.960860  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:32.960885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:32.960905  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:32.979917  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:32.979943  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:33.008187  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:33.008218  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
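
Alongside the container probe, each cycle collects the same four evidence streams: the kubelet journal, filtered dmesg, the Docker/cri-docker journals, and a container-status listing. To pull the same evidence by hand from the node, the commands can be run directly, taken from the Run: lines above (backticks swapped for $(...)):

	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo journalctl -u docker -u cri-docker -n 400
	  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
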
	I1223 00:05:35.555359  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:35.566888  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:35.586562  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.586588  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:35.586657  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:35.605495  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.605522  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:35.605579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:35.624671  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.624700  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:35.624760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:35.643198  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.643222  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:35.643278  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:35.662223  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.662245  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:35.662290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:35.681991  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.682016  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:35.682071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:35.700985  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.701009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:35.701062  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:35.719976  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.720000  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:35.720015  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:35.720029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.767694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:35.767728  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:35.792896  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:35.792935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:35.849448  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:35.849470  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:35.849491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:35.868248  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:35.868274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
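
The describe-nodes failures repeated throughout are all one symptom: kubectl dials localhost:8443, the apiserver address in /var/lib/minikube/kubeconfig, and the TCP connection is refused because nothing is listening there. A quick hedged check from inside the node (ss and curl are assumed to be available in the node image; the URL mirrors the one in the memcache.go errors above):

	  # Is anything listening on the apiserver port?
	  sudo ss -ltn 'sport = :8443'
	  # The same request kubectl is making, minus TLS verification:
	  curl -sk 'https://localhost:8443/api?timeout=32s' \
	    || echo "connection refused: apiserver is not up"
	  # The process check minikube itself loops on:
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

With all three checks failing, the remaining signal is in the kubelet journal gathered above, which should show why the kube-apiserver static pod never started.
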
	I1223 00:05:38.397175  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:38.408856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:38.428054  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.428085  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:38.428141  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:38.447350  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.447376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:38.447428  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:38.466426  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.466455  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:38.466512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:38.486074  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.486104  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:38.486173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:38.505584  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.505626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:38.505709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:38.527387  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.527416  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:38.527473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:38.547928  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.547955  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:38.548015  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:38.568237  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.568262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:38.568274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:38.568285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:38.616522  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:38.616555  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:38.638676  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:38.638707  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:38.694984  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:38.695006  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:38.695019  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:38.713940  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:38.713969  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
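	The container-status command is a small shell fallback chain: `which crictl || echo crictl` substitutes crictl's absolute path when it is installed (and otherwise leaves the bare name, so the first command fails fast), and the trailing `|| sudo docker ps -a` then falls through to plain Docker. The same pattern in isolation, exactly as the log runs it:

		# prefer crictl when present, otherwise fall back to docker
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a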
	I1223 00:05:41.244859  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:41.256283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:41.275201  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.275233  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:41.275280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:41.295272  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.295299  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:41.295353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:41.313039  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.313069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:41.313135  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:41.331394  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.331418  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:41.331491  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:41.350556  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.350583  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:41.350650  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:41.369215  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.369242  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:41.369290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:41.387799  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.387826  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:41.387877  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:41.406760  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.406785  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:41.406799  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:41.406813  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:41.453518  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:41.453548  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:41.473671  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:41.473700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:41.531098  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:41.531124  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:41.531139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:41.551968  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:41.551997  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:44.081115  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:44.092382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:44.111299  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.111326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:44.111381  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:44.130168  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.130196  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:44.130250  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:44.149028  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.149052  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:44.149109  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:44.167326  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.167346  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:44.167388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:44.185875  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.185898  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:44.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:44.205297  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.205320  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:44.205370  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:44.224561  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.224608  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:44.224661  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:44.242760  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.242782  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:44.242795  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:44.242808  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:44.290363  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:44.290399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:44.310780  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:44.310806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:44.367913  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:44.367931  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:44.367945  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:44.387052  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:44.387080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:46.916305  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:46.927926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:46.946856  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.946882  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:46.946941  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:46.965651  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.965674  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:46.965720  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:46.984835  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.984863  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:46.984920  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:47.005005  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.005033  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:47.005095  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:47.026916  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.026948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:47.026996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:47.047971  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.048003  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:47.048064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:47.067344  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.067372  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:47.067424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:47.087055  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.087079  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:47.087093  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:47.087107  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:47.134052  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:47.134085  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:47.154446  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:47.154479  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:47.210710  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:47.210734  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:47.210746  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:47.230988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:47.231017  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:49.759465  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:49.771325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:49.791131  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.791160  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:49.791219  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:49.810792  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.810814  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:49.810859  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:49.829432  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.829454  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:49.829499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:49.847527  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.847548  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:49.847603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:49.866252  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.866275  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:49.866315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:49.885934  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.885955  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:49.885996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:49.903668  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.903690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:49.903733  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:49.923276  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.923298  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:49.923309  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:49.923320  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:49.968185  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:49.968217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:49.988993  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:49.989021  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:50.052060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:50.052083  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:50.052100  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:50.070860  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:50.070885  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:52.599679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:52.611289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:52.629699  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.629724  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:52.629782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:52.648660  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.648689  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:52.648740  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:52.667204  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.667232  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:52.667287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:52.685635  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.685667  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:52.685718  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:52.703669  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.703692  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:52.703742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:52.721467  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.721495  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:52.721553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:52.739858  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.739885  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:52.739930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:52.759123  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.759151  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:52.759165  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:52.759178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:52.812520  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:52.812552  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:52.832551  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:52.832578  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:52.887680  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:52.887700  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:52.887719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:52.906246  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:52.906276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.444344  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:55.455763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:55.475305  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.475332  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:55.475389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:55.494094  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.494117  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:55.494164  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:55.511874  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.511896  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:55.511942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:55.530088  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.530113  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:55.530159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:55.548749  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.548778  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:55.548828  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:55.567179  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.567204  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:55.567269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:55.586315  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.586343  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:55.586395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:55.605282  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.605303  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:55.605314  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:55.605327  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:55.624085  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:55.624113  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.652038  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:55.652065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:55.699247  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:55.699274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:55.719031  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:55.719058  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:55.777078  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:58.278708  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:58.291024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:58.310944  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.310971  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:58.311027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:58.329419  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.329443  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:58.329499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:58.346556  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.346579  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:58.346653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:58.364565  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.364601  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:58.364653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:58.383020  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.383043  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:58.383089  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:58.401354  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.401381  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:58.401440  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:58.419356  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.419377  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:58.419426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:58.438428  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.438449  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:58.438461  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:58.438477  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:58.458325  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:58.458353  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:58.513127  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:58.513156  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:58.513173  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:58.532159  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:58.532183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:58.559409  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:58.559433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:01.105933  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:01.117378  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:01.136395  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.136418  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:01.136463  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:01.155037  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.155063  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:01.155111  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:01.173939  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.173960  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:01.174004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:01.193250  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.193271  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:01.193312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:01.210927  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.210948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:01.210990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:01.229293  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.229319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:01.229367  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:01.247971  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.247997  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:01.248059  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:01.267642  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.267667  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:01.267688  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:01.267718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:01.290552  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:01.290581  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:01.346096  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:01.346115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:01.346127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:01.364490  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:01.364516  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:01.391895  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:01.391918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:03.938979  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:03.950393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:03.969334  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.969364  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:03.969448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:03.988183  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.988205  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:03.988252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:04.007742  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.007767  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:04.007821  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:04.027502  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.027528  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:04.027582  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:04.048194  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.048222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:04.048286  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:04.067020  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.067044  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:04.067096  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:04.085747  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.085776  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:04.085829  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:04.103906  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.103936  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:04.103950  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:04.103963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:04.131404  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:04.131427  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:04.178862  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:04.178893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:04.198797  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:04.198823  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:04.255150  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:04.255174  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:04.255190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
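Each "describe nodes" attempt in this loop fails the same way: kubectl cannot reach the apiserver on localhost:8443 because nothing is listening there. A quick manual check, as a sketch (run inside the node, e.g. via `minikube ssh -p <profile>`; /livez is the apiserver's liveness endpoint):

    # does an apiserver process exist, and is anything serving on 8443?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
    curl -ksS --max-time 5 https://localhost:8443/livez || echo 'nothing listening on 8443'

Both checks would fail here, which matches the 0-container results from the docker ps filters above.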
	[... the same diagnostic cycle repeats, with only timestamps and helper PIDs changing, at 00:06:06, 00:06:09, 00:06:12, 00:06:15, 00:06:18, 00:06:20, 00:06:23, and 00:06:26: every docker ps filter reports 0 containers for every component, and every `kubectl describe nodes` attempt exits with status 1 on the same "connection refused" errors against https://localhost:8443 ...]
	I1223 00:06:29.457766  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:29.469205  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:29.488353  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.488376  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:29.488431  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:29.508035  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.508059  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:29.508114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:29.528210  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.528234  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:29.528280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:29.546344  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.546370  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:29.546432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:29.565125  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.565153  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:29.565200  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:29.584111  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.584142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:29.584195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:29.602714  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.602735  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:29.602778  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:29.621012  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.621042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:29.621058  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:29.621073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:29.669132  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:29.669168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.689406  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:29.689431  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:29.746681  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:29.746703  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:29.746720  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:29.765762  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:29.765793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:32.299443  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:32.310848  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:32.330298  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.330326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:32.330380  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:32.349664  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.349692  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:32.349745  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:32.367944  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.367969  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:32.368081  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:32.386919  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.386940  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:32.386983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:32.405416  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.405440  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:32.405487  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:32.423080  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.423100  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:32.423144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:32.441255  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.441282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:32.441336  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:32.459763  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.459789  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:32.459801  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:32.459812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:32.507284  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:32.507314  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:32.529983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:32.530014  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:32.587816  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:32.587843  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:32.587860  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:32.607796  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:32.607826  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.136489  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:35.147976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:35.166774  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.166794  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:35.166846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:35.185872  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.185899  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:35.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:35.204053  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.204074  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:35.204115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:35.223056  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.223077  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:35.223126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:35.241616  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.241645  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:35.241699  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:35.260422  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.260476  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:35.260536  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:35.279168  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.279192  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:35.279238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:35.297208  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.297236  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:35.297252  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:35.297267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:35.317273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:35.317299  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:35.374319  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:35.374337  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:35.374349  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:35.393025  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:35.393050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.420499  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:35.420537  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:37.968117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:37.979448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:37.998789  687772 logs.go:282] 0 containers: []
	W1223 00:06:37.998815  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:37.998861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:38.019815  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.019847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:38.019910  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:38.042524  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.042552  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:38.042617  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:38.061464  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.061489  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:38.061544  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:38.080482  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.080509  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:38.080558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:38.099189  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.099215  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:38.099279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:38.118161  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.118188  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:38.118244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:38.136752  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.136786  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:38.136803  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:38.136819  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:38.182751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:38.182779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:38.202352  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:38.202375  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:38.257901  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:38.257922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:38.257933  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:38.276963  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:38.276988  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:40.806792  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:40.818244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:40.837324  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.837348  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:40.837402  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:40.856364  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.856387  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:40.856453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:40.874753  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.874780  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:40.874831  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:40.893167  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.893193  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:40.893242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:40.910901  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.910924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:40.910976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:40.930108  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.930133  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:40.930191  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:40.949021  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.949047  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:40.949101  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:40.967221  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.967246  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:40.967260  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:40.967276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:40.988752  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:40.988779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:41.048349  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:41.048374  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:41.048387  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:41.067112  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:41.067138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:41.093421  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:41.093445  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:43.639363  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:43.653263  687772 out.go:203] 
	W1223 00:06:43.654345  687772 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1223 00:06:43.654374  687772 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1223 00:06:43.654383  687772 out.go:285] * Related issues:
	* Related issues:
	W1223 00:06:43.654397  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1223 00:06:43.654411  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1223 00:06:43.655505  687772 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p newest-cni-348344 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1": exit status 105
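What the six-minute loop above amounts to: until the StartHostTimeout (6m0s in the profile config) expires, minikube repeats two cheap probes over SSH into the node, a pgrep for the kube-apiserver process and a docker-ps lookup for its container, and gathers kubelet/dmesg/Docker logs between rounds. Below is a minimal local sketch of that loop in Go; the helper name apiserverUp is illustrative, and the real probes run inside the minikube node over SSH rather than on the host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverUp mirrors the two probes visible in the log: a pgrep for the
// kube-apiserver process and a docker-ps lookup for its container.
// Illustrative only; minikube runs these over SSH inside the node.
func apiserverUp() bool {
	// pgrep exits non-zero when no process matches, which Run() reports as an error.
	if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		return false
	}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the wait budget shown in the log
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // roughly the cadence between probe rounds above
	}
	fmt.Println("K8S_APISERVER_MISSING: apiserver process never appeared")
}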
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-348344
helpers_test.go:244: (dbg) docker inspect newest-cni-348344:

-- stdout --
	[
	    {
	        "Id": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	        "Created": "2025-12-22T23:50:45.124975619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 687974,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-23T00:00:34.301956639Z",
	            "FinishedAt": "2025-12-23T00:00:32.890201351Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hostname",
	        "HostsPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hosts",
	        "LogPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b-json.log",
	        "Name": "/newest-cni-348344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-348344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-348344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	                "LowerDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-348344",
	                "Source": "/var/lib/docker/volumes/newest-cni-348344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-348344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-348344",
	                "name.minikube.sigs.k8s.io": "newest-cni-348344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1d6c9b4cbbb98d27f15b901c20b574a86c3cb628ad2da992c2e0c5437cff03b0",
	            "SandboxKey": "/var/run/docker/netns/1d6c9b4cbbb9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-348344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1020bfe2df349af00e9e2f4197eff27d709a25503c20a26c662019055cba21bb",
	                    "EndpointID": "66b6b308d2bcc6eca28baac06e33fe8d42bbea1f9fe8f1f5ee1a462ebfeba9bc",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:46:4e:43:ff:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-348344",
	                        "133dc19d84d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
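The inspect dump confirms the container is running and that 8443/tcp is published on 127.0.0.1:33171, so the port mapping itself is intact; the refused connections come from nothing listening inside the node. For a manual check of that mapping, a short Go sketch follows; hostPort is a hypothetical helper, not part of the test harness.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort pulls the host-side mapping for a container port out of
// docker inspect, e.g. 8443/tcp -> 33171 in the dump above.
func hostPort(container, port string) (string, error) {
	// Equivalent to: docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' <container>
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("newest-cni-348344", "8443/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver would be reachable (if running) at 127.0.0.1:%s\n", p)
}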
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (297.113434ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25: (1.159832666s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-003676 sudo iptables -t nat -L -n -v                                 │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status kubelet --all --full --no-pager         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat kubelet --no-pager                         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status docker --all --full --no-pager          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat docker --no-pager                          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/docker/daemon.json                              │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo docker system info                                       │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat cri-docker --no-pager                      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cri-dockerd --version                                    │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status containerd --all --full --no-pager      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat containerd --no-pager                      │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /lib/systemd/system/containerd.service               │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/containerd/config.toml                          │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo containerd config dump                                   │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status crio --all --full --no-pager            │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │                     │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat crio --no-pager                            │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo crio config                                              │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ delete  │ -p kubenet-003676                                                               │ kubenet-003676 │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/23 00:00:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1223 00:00:34.066824  687772 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:34.067051  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067058  687772 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:34.067063  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067257  687772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:34.067701  687772 out.go:368] Setting JSON to false
	I1223 00:00:34.068753  687772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13374,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:34.068805  687772 start.go:143] virtualization: kvm guest
	W1223 00:00:29.964565  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:31.965119  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:33.965297  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:34.070524  687772 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:34.072192  687772 notify.go:221] Checking for updates...
	I1223 00:00:34.072201  687772 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:34.073912  687772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:34.074996  687772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:34.076047  687772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:34.077175  687772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:34.078295  687772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:34.079882  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:34.080446  687772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:34.106101  687772 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:34.106213  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.161275  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.151129133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.161373  687772 docker.go:319] overlay module found
	I1223 00:00:34.163775  687772 out.go:179] * Using the docker driver based on existing profile
	I1223 00:00:34.164711  687772 start.go:309] selected driver: docker
	I1223 00:00:34.164723  687772 start.go:928] validating driver "docker" against &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.164829  687772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1223 00:00:34.165640  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.231050  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.221418272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.231362  687772 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1223 00:00:34.231388  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:34.231454  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:34.231489  687772 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.233188  687772 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1223 00:00:34.234320  687772 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:34.235503  687772 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:34.236442  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:34.236471  687772 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1223 00:00:34.236486  687772 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:34.236540  687772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:34.236575  687772 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1223 00:00:34.236586  687772 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1223 00:00:34.236749  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.256540  687772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:34.256557  687772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1223 00:00:34.256572  687772 cache.go:243] Successfully downloaded all kic artifacts
	I1223 00:00:34.256623  687772 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1223 00:00:34.256695  687772 start.go:364] duration metric: took 39.918µs to acquireMachinesLock for "newest-cni-348344"
	I1223 00:00:34.256714  687772 start.go:96] Skipping create...Using existing machine configuration
	I1223 00:00:34.256719  687772 fix.go:54] fixHost starting: 
	I1223 00:00:34.256918  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.273955  687772 fix.go:112] recreateIfNeeded on newest-cni-348344: state=Stopped err=<nil>
	W1223 00:00:34.273978  687772 fix.go:138] unexpected machine state, will restart: <nil>
	W1223 00:00:31.998314  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:34.497435  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:36.498048  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:34.276035  687772 out.go:252] * Restarting existing docker container for "newest-cni-348344" ...
	I1223 00:00:34.276100  687772 cli_runner.go:164] Run: docker start newest-cni-348344
	I1223 00:00:34.520995  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.540249  687772 kic.go:430] container "newest-cni-348344" state is running.
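The restart path above never recreates the machine: it reads the container state through Docker's Go-template inspect output and then starts the stopped container in place. A minimal manual equivalent, using the profile container name from this run:

	# Print only the container state, e.g. "running" or "exited"
	docker container inspect newest-cni-348344 --format '{{.State.Status}}'
	# Start it in place; idempotent, starting an already-running container succeeds
	docker start newest-cni-348344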
	I1223 00:00:34.540736  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:34.560319  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.560718  687772 machine.go:94] provisionDockerMachine start ...
	I1223 00:00:34.560825  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:34.581907  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:34.582194  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:34.582211  687772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1223 00:00:34.583095  687772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41172->127.0.0.1:33168: read: connection reset by peer
	I1223 00:00:37.726578  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.726621  687772 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1223 00:00:37.726764  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.746947  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.747183  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.747203  687772 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1223 00:00:37.900818  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.900900  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.919317  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.919561  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.919579  687772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1223 00:00:38.062239  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.062284  687772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1223 00:00:38.062331  687772 ubuntu.go:190] setting up certificates
	I1223 00:00:38.062344  687772 provision.go:84] configureAuth start
	I1223 00:00:38.062400  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:38.081263  687772 provision.go:143] copyHostCerts
	I1223 00:00:38.081355  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1223 00:00:38.081386  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1223 00:00:38.081497  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1223 00:00:38.081760  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1223 00:00:38.081785  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1223 00:00:38.081851  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1223 00:00:38.082007  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1223 00:00:38.082027  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1223 00:00:38.082101  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1223 00:00:38.082238  687772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
	I1223 00:00:38.170695  687772 provision.go:177] copyRemoteCerts
	I1223 00:00:38.170759  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1223 00:00:38.170815  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.189123  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.291920  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1223 00:00:38.309166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1223 00:00:38.326166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1223 00:00:38.344564  687772 provision.go:87] duration metric: took 282.199681ms to configureAuth
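configureAuth above re-issues the Docker TLS server certificate against the minikube CA, with the SANs listed in the provision.go line (127.0.0.1, 192.168.94.2, localhost, minikube, newest-cni-348344). minikube does this with Go's crypto libraries; a rough openssl sketch of the same issuance, with file names assumed and bash required for the <(...) substitution:

	# CSR for the server certificate
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.newest-cni-348344"
	# Sign it with the CA, attaching the same SANs seen in the log
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:localhost,DNS:minikube,DNS:newest-cni-348344')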
	I1223 00:00:38.344627  687772 ubuntu.go:206] setting minikube options for container-runtime
	I1223 00:00:38.344911  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:38.344995  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.366260  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.366529  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.366545  687772 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1223 00:00:38.510728  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1223 00:00:38.510754  687772 ubuntu.go:71] root file system type: overlay
	I1223 00:00:38.510908  687772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1223 00:00:38.510979  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.532018  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.532329  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.532458  687772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1223 00:00:38.686369  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1223 00:00:38.686443  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.704974  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.705187  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.705204  687772 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1223 00:00:38.853249  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.853285  687772 machine.go:97] duration metric: took 4.292539002s to provisionDockerMachine
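The unit file written above deliberately empties ExecStart= before setting its own, for exactly the reason spelled out in the embedded comment: systemd refuses multiple ExecStart= lines for a Type=notify service. Whether the override took effect can be confirmed on the node; these verification commands are illustrative, not part of this run:

	# Show the unit exactly as systemd loaded it, including the cleared ExecStart=
	systemctl cat docker.service
	# Print just the effective start command line
	systemctl show docker.service -p ExecStart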
	I1223 00:00:38.853303  687772 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1223 00:00:38.853321  687772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1223 00:00:38.853419  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1223 00:00:38.853487  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.872421  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.979494  687772 ssh_runner.go:195] Run: cat /etc/os-release
	I1223 00:00:38.983309  687772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1223 00:00:38.983340  687772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1223 00:00:38.983352  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1223 00:00:38.983401  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1223 00:00:38.983478  687772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1223 00:00:38.983566  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1223 00:00:38.991328  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:39.009038  687772 start.go:296] duration metric: took 155.718049ms for postStartSetup
	I1223 00:00:39.009116  687772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1223 00:00:39.009156  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.028095  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	W1223 00:00:36.464751  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:38.465798  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:39.126878  687772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1223 00:00:39.131527  687772 fix.go:56] duration metric: took 4.874800463s for fixHost
	I1223 00:00:39.131555  687772 start.go:83] releasing machines lock for "newest-cni-348344", held for 4.874847678s
	I1223 00:00:39.131653  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:39.149572  687772 ssh_runner.go:195] Run: cat /version.json
	I1223 00:00:39.149644  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.149684  687772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1223 00:00:39.149791  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.169897  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.170262  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.329086  687772 ssh_runner.go:195] Run: systemctl --version
	I1223 00:00:39.336405  687772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1223 00:00:39.341291  687772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1223 00:00:39.341348  687772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1223 00:00:39.349279  687772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1223 00:00:39.349306  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.349351  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.349509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.363512  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1223 00:00:39.372815  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1223 00:00:39.381448  687772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.381508  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1223 00:00:39.390109  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.399162  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1223 00:00:39.408060  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.416584  687772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1223 00:00:39.424711  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1223 00:00:39.433432  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1223 00:00:39.442056  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1223 00:00:39.450905  687772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1223 00:00:39.458512  687772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1223 00:00:39.466991  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:39.550442  687772 ssh_runner.go:195] Run: sudo systemctl restart containerd
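The sed edits above pin containerd to the cgroupfs driver that detect.go found on the host (SystemdCgroup = false, runtime io.containerd.runc.v2). The resulting setting can be read back directly; a verification sketch rather than output from this run:

	# containerd side: the toggle rewritten by the sed commands above
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# docker side: should report the same driver once dockerd is back up
	docker info --format '{{.CgroupDriver}}'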
	I1223 00:00:39.632902  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.632954  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.633002  687772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1223 00:00:39.646974  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.659240  687772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1223 00:00:39.675979  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.688406  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1223 00:00:39.700912  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.715235  687772 ssh_runner.go:195] Run: which cri-dockerd
	I1223 00:00:39.718945  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1223 00:00:39.726972  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1223 00:00:39.739559  687772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1223 00:00:39.825824  687772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1223 00:00:39.909620  687772 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.909743  687772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1223 00:00:39.923453  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1223 00:00:39.935401  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:40.031388  687772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1223 00:00:40.779801  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1223 00:00:40.794700  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1223 00:00:40.807727  687772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1223 00:00:40.821730  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:40.835038  687772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1223 00:00:40.917554  687772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1223 00:00:41.006720  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.090943  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1223 00:00:41.119047  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1223 00:00:41.131277  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.215437  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1223 00:00:41.283622  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:41.298485  687772 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1223 00:00:41.298551  687772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1223 00:00:41.302554  687772 start.go:564] Will wait 60s for crictl version
	I1223 00:00:41.302645  687772 ssh_runner.go:195] Run: which crictl
	I1223 00:00:41.306307  687772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1223 00:00:41.332425  687772 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
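crictl resolves the runtime here because /etc/crictl.yaml, written a few lines earlier, points it at the cri-dockerd socket. The same query can be made without the config file by passing the endpoint explicitly (endpoint taken from this log):

	# Equivalent to the version call above, bypassing /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version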
	I1223 00:00:41.332495  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.358777  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.385668  687772 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1223 00:00:41.385749  687772 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1223 00:00:41.403696  687772 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1223 00:00:41.407961  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.419714  687772 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1223 00:00:38.997749  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:40.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:41.420647  687772 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1223 00:00:41.420848  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:41.420937  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.444049  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.444075  687772 docker.go:624] Images already preloaded, skipping extraction
	I1223 00:00:41.444128  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.466181  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.466206  687772 cache_images.go:86] Images are preloaded, skipping loading
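The preload check is a plain tag comparison: the docker images listing above is matched against the image set expected for v1.35.0-rc.1, and tarball extraction is skipped when everything is present. A one-image manual spot check along the same lines, with the tag taken from the listing:

	# Exits 0 only if the exact tag is already in the local daemon
	docker images --format '{{.Repository}}:{{.Tag}}' \
	  | grep -qx 'registry.k8s.io/kube-apiserver:v1.35.0-rc.1' && echo present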
	I1223 00:00:41.466215  687772 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1223 00:00:41.466312  687772 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1223 00:00:41.466372  687772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1223 00:00:41.520065  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:41.520097  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:41.520116  687772 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1223 00:00:41.520151  687772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1223 00:00:41.520280  687772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1223 00:00:41.520348  687772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1223 00:00:41.529909  687772 binaries.go:51] Found k8s binaries, skipping transfer
	I1223 00:00:41.529987  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1223 00:00:41.538670  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1223 00:00:41.552566  687772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1223 00:00:41.565766  687772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
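At this point the rendered config sits on the node as /var/tmp/minikube/kubeadm.yaml.new, containing the four documents shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). On kubeadm v1.26 or newer such a file can be statically checked before use; a sketch, assuming the versioned binary path from this log, not something this run performs:

	# Validate the multi-document config against kubeadm's schemas
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new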
	I1223 00:00:41.578604  687772 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1223 00:00:41.582388  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.592239  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.689716  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:41.716265  687772 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1223 00:00:41.716293  687772 certs.go:195] generating shared ca certs ...
	I1223 00:00:41.716315  687772 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:41.716492  687772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1223 00:00:41.716548  687772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1223 00:00:41.716557  687772 certs.go:257] generating profile certs ...
	I1223 00:00:41.716731  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1223 00:00:41.716814  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1223 00:00:41.716864  687772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1223 00:00:41.716992  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1223 00:00:41.717032  687772 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1223 00:00:41.717041  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1223 00:00:41.717076  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1223 00:00:41.717110  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1223 00:00:41.717142  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1223 00:00:41.717206  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:41.718210  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1223 00:00:41.739858  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1223 00:00:41.759304  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1223 00:00:41.777433  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1223 00:00:41.794878  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1223 00:00:41.811695  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1223 00:00:41.829786  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1223 00:00:41.846666  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1223 00:00:41.863546  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1223 00:00:41.880762  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1223 00:00:41.898275  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1223 00:00:41.915259  687772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1223 00:00:41.927541  687772 ssh_runner.go:195] Run: openssl version
	I1223 00:00:41.933577  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.940758  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1223 00:00:41.948346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952096  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952140  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.987400  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1223 00:00:41.995158  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.002657  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1223 00:00:42.010346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014050  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014097  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.048067  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1223 00:00:42.055784  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.063393  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1223 00:00:42.071081  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075446  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075515  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.110438  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1223 00:00:42.118365  687772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1223 00:00:42.122225  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1223 00:00:42.157294  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1223 00:00:42.192810  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1223 00:00:42.226497  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1223 00:00:42.261017  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1223 00:00:42.299910  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
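Each of the openssl calls above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that exit status is what decides whether certs get regenerated. The same check with the expiry date printed for inspection, shown here as an illustrative follow-up:

	# Succeeds only if the cert is still valid 86400s (24h) from now
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo 'valid for at least 24h'
	# Show the actual notAfter timestamp
	openssl x509 -noout -enddate -in /var/lib/minikube/certs/front-proxy-client.crt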
	I1223 00:00:42.335335  687772 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:42.335492  687772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1223 00:00:42.355576  687772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1223 00:00:42.364792  687772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1223 00:00:42.364813  687772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1223 00:00:42.364868  687772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1223 00:00:42.373141  687772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1223 00:00:42.374088  687772 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.374674  687772 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-348344" cluster setting kubeconfig missing "newest-cni-348344" context setting]
	I1223 00:00:42.375665  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.377667  687772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1223 00:00:42.385784  687772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1223 00:00:42.385817  687772 kubeadm.go:602] duration metric: took 20.99658ms to restartPrimaryControlPlane
	I1223 00:00:42.385829  687772 kubeadm.go:403] duration metric: took 50.508354ms to StartCluster
	I1223 00:00:42.385848  687772 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.385918  687772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.387577  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.387909  687772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:42.388038  687772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:42.388131  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:42.388136  687772 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-348344"
	I1223 00:00:42.388154  687772 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-348344"
	I1223 00:00:42.388170  687772 addons.go:70] Setting dashboard=true in profile "newest-cni-348344"
	I1223 00:00:42.388189  687772 addons.go:70] Setting default-storageclass=true in profile "newest-cni-348344"
	I1223 00:00:42.388208  687772 addons.go:239] Setting addon dashboard=true in "newest-cni-348344"
	W1223 00:00:42.388222  687772 addons.go:248] addon dashboard should already be in state true
	I1223 00:00:42.388226  687772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-348344"
	I1223 00:00:42.388262  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388184  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388611  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388764  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388771  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.390679  687772 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:42.391709  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:42.411960  687772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1223 00:00:42.411964  687772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:42.412088  687772 addons.go:239] Setting addon default-storageclass=true in "newest-cni-348344"
	I1223 00:00:42.412122  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.412456  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.413023  687772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.413043  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:42.413094  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.416720  687772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1223 00:00:42.418014  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1223 00:00:42.418038  687772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1223 00:00:42.418101  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.435878  687772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.435898  687772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:42.435951  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.437317  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.440138  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.468807  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
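
The three `docker container inspect -f ...HostPort...` calls above resolve which host port Docker mapped to the container's SSH port 22; the resulting port (33168) is what the sshutil clients then connect to. As a self-contained sketch of how that Go template evaluates (the struct shape here is assumed to mirror docker inspect's JSON; this is not minikube code):

package main

import (
	"os"
	"text/template"
)

// inspect mimics the part of docker's inspect output the template walks:
// NetworkSettings.Ports maps a container port to a list of host bindings.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct{ HostPort string }
	}
}

func main() {
	var data inspect
	data.NetworkSettings.Ports = map[string][]struct{ HostPort string }{
		"22/tcp": {{HostPort: "33168"}}, // the port this report resolved
	}
	// Same template string the cli_runner lines above pass to docker inspect.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 33168
		panic(err)
	}
}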
	I1223 00:00:42.547260  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:42.600477  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1223 00:00:42.600501  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1223 00:00:42.601363  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.607651  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.614184  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1223 00:00:42.614204  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1223 00:00:42.628125  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1223 00:00:42.628154  687772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1223 00:00:42.643093  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1223 00:00:42.643121  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1223 00:00:42.702024  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1223 00:00:42.702052  687772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1223 00:00:42.716211  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1223 00:00:42.716238  687772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1223 00:00:42.729547  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1223 00:00:42.729569  687772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1223 00:00:42.742218  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1223 00:00:42.742246  687772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1223 00:00:42.754716  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:42.754741  687772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1223 00:00:42.767403  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.251127  687772 api_server.go:52] waiting for apiserver process to appear ...
	W1223 00:00:43.251190  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.251231  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251243  687772 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
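
The retry.go:84 entry above is minikube backing off and re-running the failed apply. As a rough illustration of that pattern only (hypothetical names, not minikube's actual retry.go), a doubling-delay retry loop in Go looks like:

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// doubling the delay after each failure ("will retry after 100ms", then
// longer). Illustrative only.
func retryWithBackoff(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	start := time.Now()
	err := retryWithBackoff(8, 100*time.Millisecond, func() error {
		// Stand-in for the kubectl apply: fails until the "apiserver" is up.
		if time.Since(start) < time.Second {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}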
	I1223 00:00:43.251206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:43.251536  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.392258  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.445582  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.478824  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.509277  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:43.537617  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.565023  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.661224  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.715310  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.751478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.004029  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.062136  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:40.965708  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.465160  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.498161  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:45.997960  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
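
The jump backwards in timestamps here is not corruption: lines from PIDs 679852 (coredns readiness) and 622784 (node readiness for no-preload-063943) come from tests running in parallel with this one (687772), and their buffers flush interleaved. When reading a section like this, filtering by the PID column untangles it; a small reading aid (assumes the glog-style line layout shown above, not part of the report tooling):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Keep only lines emitted by one PID. glog-style lines carry the PID as the
// token after the timestamp, e.g. "I1223 00:00:43.251127  687772 ...".
// Usage: go run filter.go 687772 < section.log
func main() {
	pid := regexp.QuoteMeta(os.Args[1])
	re := regexp.MustCompile(`^\s*[IWEF]\d{4} \S+\s+` + pid + `\s`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // apply commands make long lines
	for sc.Scan() {
		if re.MatchString(sc.Text()) {
			fmt.Println(sc.Text())
		}
	}
}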
	I1223 00:00:44.088452  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.143262  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.251335  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.383985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:44.396693  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.439768  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:44.455552  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.902917  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.962468  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.251805  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.326701  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:45.400433  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.424647  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:45.480956  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.751310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.799387  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:45.854148  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.251658  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.752208  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.836006  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:46.890186  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.995326  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:47.057423  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.251702  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:47.426474  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:47.485952  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.752131  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:48.207567  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:48.251315  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:48.262862  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.519836  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:48.578806  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.752112  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:45.465342  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.465739  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:50.498122  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:49.251876  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.751844  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.890160  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:49.947020  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.066047  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:50.120129  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.251336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:50.558241  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:50.615271  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.751572  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.252351  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.751913  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.251786  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.752327  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
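
Between apply attempts, minikube polls for the apiserver process about twice a second with `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits 0 once a matching process exists. A hedged sketch of that polling loop (run locally via os/exec; minikube's ssh_runner executes it over SSH instead):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll pgrep until kube-apiserver shows up or we time out. With -f the
// pattern is matched against the whole command line, -x requires it to
// match exactly, and -n picks the newest matching process.
func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil { // exit status 0: at least one process matched
			fmt.Printf("apiserver process found, pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}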
	I1223 00:00:52.791985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:52.850108  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:53.251446  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:53.751646  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:49.964544  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:51.964692  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:53.964842  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:52.997659  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:54.998340  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:54.252244  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.751444  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.347206  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:55.401615  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:55.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.936027  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:55.995088  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:56.251432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:56.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.111686  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:57.169648  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:57.251735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.751676  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.251279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.751377  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:55.965091  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:58.465307  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:59.465067  679852 pod_ready.go:94] pod "coredns-66bc5c9577-v4sr7" is "Ready"
	I1223 00:00:59.465093  679852 pod_ready.go:86] duration metric: took 31.505726579s for pod "coredns-66bc5c9577-v4sr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.467499  679852 pod_ready.go:83] waiting for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.471040  679852 pod_ready.go:94] pod "etcd-kubenet-003676" is "Ready"
	I1223 00:00:59.471063  679852 pod_ready.go:86] duration metric: took 3.544638ms for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.472907  679852 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.476385  679852 pod_ready.go:94] pod "kube-apiserver-kubenet-003676" is "Ready"
	I1223 00:00:59.476406  679852 pod_ready.go:86] duration metric: took 3.481083ms for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.478385  679852 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.663149  679852 pod_ready.go:94] pod "kube-controller-manager-kubenet-003676" is "Ready"
	I1223 00:00:59.663178  679852 pod_ready.go:86] duration metric: took 184.769862ms for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.863586  679852 pod_ready.go:83] waiting for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.263634  679852 pod_ready.go:94] pod "kube-proxy-4ftjm" is "Ready"
	I1223 00:01:00.263661  679852 pod_ready.go:86] duration metric: took 400.030267ms for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.464316  679852 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863672  679852 pod_ready.go:94] pod "kube-scheduler-kubenet-003676" is "Ready"
	I1223 00:01:00.863704  679852 pod_ready.go:86] duration metric: took 399.359894ms for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863716  679852 pod_ready.go:40] duration metric: took 32.907880274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1223 00:01:00.909769  679852 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1223 00:01:00.911549  679852 out.go:179] * Done! kubectl is now configured to use "kubenet-003676" cluster and "default" namespace by default
	W1223 00:00:57.497653  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:59.497943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:59.251660  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.751533  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.252079  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.347519  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:00.405959  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.406017  687772 retry.go:84] will retry after 5.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.564176  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:00.620480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.751684  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.251318  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.252159  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.751813  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.251679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.752247  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:01.997413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:03.998103  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:06.498109  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:04.252308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.751451  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.252117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.751808  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.759328  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:05.824086  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:05.824133  687772 retry.go:84] will retry after 18.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.105622  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:06.162303  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.251478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:06.751336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.751827  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.251532  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.751779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:08.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:11.498214  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:09.251540  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.503324  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:09.562697  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.562742  687772 retry.go:84] will retry after 15.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.751986  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.251441  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.751356  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.251389  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.752279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.251802  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.751324  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.251818  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.751866  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:13.998099  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:16.497424  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:14.252139  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.751583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.252310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.751625  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.251842  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.252084  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.751783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.251376  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.751964  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:18.498157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:20.998023  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:19.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.751822  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.251704  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.568697  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:20.639449  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.639500  687772 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.751734  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.252249  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.751801  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.251412  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.751366  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.252197  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.751273  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:23.498226  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:25.998233  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:24.252133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.590346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:24.662633  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.662674  687772 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
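Note: the retry.go entries (delays of 5.4s, 18.8s, 15.6s, 12s, 26.5s so far) show each addon apply being re-queued with a varying wait. The exact backoff is minikube's internal retry logic, but the observable behavior reduces to a loop like this sketch (delays illustrative, command and paths taken from the log):

    for delay in 5 15 25; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done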
	I1223 00:01:24.751773  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:25.211650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:01:25.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:25.284581  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:25.752154  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.251728  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.751953  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.251838  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.751740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.251778  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.752182  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:28.498149  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:30.998068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:29.252321  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.752290  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.251523  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.251957  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.751675  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.251501  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.610650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:32.668272  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.668321  687772 retry.go:84] will retry after 25.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.751294  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.251566  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.751514  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:33.497906  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:35.997708  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:34.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.751347  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.251743  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.751789  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.251397  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.751367  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.251392  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.751873  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.751798  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:38.498282  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:40.998196  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:39.251316  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.752288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.251339  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.751372  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.752308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.752021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:42.776761  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.776791  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:42.776839  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:42.797079  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.797110  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:42.797168  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:42.817812  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.817839  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:42.817907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:42.837835  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.837868  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:42.837924  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:42.858165  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.858188  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:42.858231  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:42.878211  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.878236  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:42.878281  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:42.897739  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.897762  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:42.897817  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:42.918314  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.918340  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:42.918352  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:42.918362  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:42.965734  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:42.965766  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:42.986555  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:42.986585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:43.052069  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:43.052092  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:43.052108  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:43.072511  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:43.072542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
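Note: once the apiserver fails to appear, minikube moves to log gathering. The container-status probe just above prefers crictl when installed, falls back to the literal word crictl (which fails), and finally to docker; a standalone form of the same fallback chain, assuming the node's shell:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a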
	W1223 00:01:43.497482  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:45.497538  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:45.620354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:45.631856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:45.651448  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.651473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:45.651528  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:45.671204  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.671229  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:45.671279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:45.690397  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.690418  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:45.690470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:45.711316  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.711342  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:45.711400  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:45.731731  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.731755  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:45.731812  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:45.751415  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.751442  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:45.751500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:45.770135  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.770157  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:45.770202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:45.789088  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.789114  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
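	The "0 containers" probes above filter `docker ps -a` on the `k8s_` name prefix that Kubernetes gives its containers under the Docker runtime. A sketch of the same probe as a standalone helper (illustrative; the original runs through minikube's ssh_runner rather than locally):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // k8sContainerIDs returns the IDs of containers for one kube component,
	    // mirroring the `docker ps -a --filter=name=k8s_... --format={{.ID}}`
	    // probes above. An empty result corresponds to the "0 containers" /
	    // "No container was found" lines in the log.
	    func k8sContainerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        ids, err := k8sContainerIDs("kube-apiserver")
	        fmt.Println(len(ids), "containers:", ids, err)
	    }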
	I1223 00:01:45.789127  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:45.789139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:45.819714  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:45.819741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:45.867983  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:45.868013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:45.888018  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:45.888051  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:45.945247  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:45.945270  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:45.945286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:47.390289  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:47.445750  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:47.445788  687772 retry.go:84] will retry after 18.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
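	The storageclass apply fails for the same underlying reason — there is no apiserver to serve the OpenAPI schema that client-side validation needs — and retry.go schedules another attempt 18.6s later. A minimal sketch of that retry-after-delay pattern (delays illustrative, not the actual schedule minikube's retry.go computes):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // retryWithBackoff re-runs fn until it succeeds or attempts are
	    // exhausted, sleeping longer between tries, like the
	    // "will retry after 18.6s" lines above.
	    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = fn(); err == nil {
	                return nil
	            }
	            time.Sleep(base << i) // base, 2*base, 4*base, ...
	        }
	        return err
	    }

	    func main() {
	        err := retryWithBackoff(func() error {
	            return errors.New("connection refused")
	        }, 3, 100*time.Millisecond)
	        fmt.Println(err)
	    }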
	I1223 00:01:48.464795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:48.476515  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:48.499002  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.499028  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:48.499082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:48.523130  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.523165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:48.523225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:48.547882  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.547904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:48.547950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:48.567534  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.567560  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:48.567627  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:48.590197  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.590222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:48.590280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:48.609279  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.609306  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:48.609357  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:48.628700  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.628731  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:48.628795  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:48.648378  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.648402  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:48.648416  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:48.648433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:48.672700  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:48.672741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:48.743864  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:48.743885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:48.743897  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:48.762911  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:48.762941  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:48.793205  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:48.793235  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1223 00:01:47.498181  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:49.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:51.128346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:51.182480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.182522  687772 retry.go:84] will retry after 29.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
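	Each failed apply suggests its own escape hatch: with the apiserver down, validation cannot download the OpenAPI schema, and `--validate=false` skips that client-side step (though the apply itself would still need a reachable server to succeed). A sketch of shelling out the way these log lines do (binary location and manifest path illustrative):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // applyManifest runs `kubectl apply --force -f <manifest>`, optionally
	    // disabling client-side validation, the workaround the error messages
	    // above suggest.
	    func applyManifest(manifest string, validate bool) ([]byte, error) {
	        args := []string{"apply", "--force", "-f", manifest}
	        if !validate {
	            args = append(args, "--validate=false")
	        }
	        return exec.Command("kubectl", args...).CombinedOutput()
	    }

	    func main() {
	        out, err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml", false)
	        fmt.Println(string(out), err)
	    }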
	I1223 00:01:51.341781  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:51.353222  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:51.373421  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.373447  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:51.373497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:51.392557  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.392613  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:51.392675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:51.411939  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.411964  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:51.412018  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:51.430968  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.430998  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:51.431054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:51.451234  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.451266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:51.451329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:51.470904  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.470947  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:51.471017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:51.490121  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.490146  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:51.490201  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:51.511843  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.511875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:51.511891  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:51.511906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:51.545106  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:51.545138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:51.594541  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:51.594576  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:51.615455  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:51.615485  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:51.680061  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:51.680080  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:51.680091  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1223 00:01:52.497841  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:54.498062  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:56.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
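	Interleaved with the functional-test output, the no-preload test (pid 622784) is polling its node's Ready condition every couple of seconds and hitting the same refused port on 192.168.103.2:8443. A bare-bones version of that poll loop (URL, TLS handling, interval, and attempt count all illustrative; the real check also authenticates and inspects the node's conditions):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitReachable polls url until the server answers at all, mirroring
	    // the node_ready.go "will retry" loop above. Certificate verification
	    // is skipped here since minikube's apiserver uses a self-signed CA.
	    func waitReachable(url string, interval time.Duration, attempts int) error {
	        client := &http.Client{
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	            Timeout: interval,
	        }
	        var err error
	        for i := 0; i < attempts; i++ {
	            var resp *http.Response
	            if resp, err = client.Get(url); err == nil {
	                resp.Body.Close()
	                return nil
	            }
	            time.Sleep(interval)
	        }
	        return fmt.Errorf("still unreachable: %w", err)
	    }

	    func main() {
	        fmt.Println(waitReachable("https://192.168.103.2:8443/api/v1/nodes/no-preload-063943",
	            2*time.Second, 3))
	    }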
	I1223 00:01:54.199432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:54.211004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:54.230469  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.230498  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:54.230548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:54.251212  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.251245  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:54.251300  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:54.274147  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.274177  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:54.274238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:54.297381  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.297413  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:54.297471  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:54.316290  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.316315  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:54.316362  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:54.335315  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.335339  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:54.335393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:54.354058  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.354089  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:54.354144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:54.372661  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.372686  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:54.372700  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:54.372715  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:54.417565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:54.417601  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:54.438013  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:54.438041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:54.497552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:54.497575  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:54.497589  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.517495  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:54.517523  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:57.056037  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:57.067478  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:57.087084  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.087114  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:57.087183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:57.105193  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.105218  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:57.105270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:57.122859  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.122885  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:57.122931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:57.141996  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.142021  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:57.142074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:57.160005  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.160032  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:57.160083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:57.178889  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.178915  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:57.178989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:57.196419  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.196446  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:57.196498  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:57.214764  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.214790  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:57.214804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:57.214817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:57.266333  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:57.266370  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:57.289301  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:57.289330  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:57.347060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:57.347099  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:57.347117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:57.370222  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:57.370257  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:58.466074  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:58.519779  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:58.519828  687772 retry.go:84] will retry after 42.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:01:58.498177  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:00.998033  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:59.898063  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:59.909410  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:59.927950  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.927974  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:59.928017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:59.946773  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.946800  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:59.946861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:59.964419  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.964443  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:59.964500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:59.982454  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.982478  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:59.982537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:00.000838  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.000860  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:00.000929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:00.018673  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.018696  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:00.018747  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:00.035944  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.035973  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:00.036027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:00.053640  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.053666  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:00.053679  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:00.053700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:00.098844  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:00.098870  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:00.120198  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:00.120224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:00.175459  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:00.175477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:00.175490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:00.194106  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:00.194146  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:02.722361  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:02.733721  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:02.753939  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.753963  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:02.754025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:02.773570  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.773610  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:02.773665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:02.793427  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.793451  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:02.793514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:02.812154  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.812183  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:02.812241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:02.830757  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.830777  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:02.830819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:02.849140  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.849163  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:02.849206  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:02.867505  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.867529  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:02.867584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:02.885892  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.885920  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:02.885935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:02.885950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:02.933880  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:02.933906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:02.955273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:02.955304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:03.009924  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:03.009951  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:03.010012  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:03.028798  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:03.028821  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:03.497953  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:05.997506  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:05.557718  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:05.569769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:05.588873  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.588899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:05.588946  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:05.607265  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.607289  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:05.607342  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:05.625761  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.625790  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:05.625860  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:05.643443  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.643464  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:05.643513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:05.661241  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.661266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:05.661314  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:05.679744  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.679764  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:05.679805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:05.697808  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.697831  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:05.697878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:05.716222  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.716245  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:05.716255  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:05.716269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:05.772404  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:05.772437  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
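The dmesg invocation above uses standard util-linux flags: -H turns on human-readable output (which by itself also enables color and a pager), while -P and -L=never switch the pager and color back off so the output can be captured, and --level keeps only the listed severities. The same command spelled out in long form (a sketch; identical behavior assumed):

    sudo dmesg --nopager --human --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400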
	I1223 00:02:05.793610  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:05.793643  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:05.849453  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:05.849479  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:05.849493  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:05.868250  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:05.868273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
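The "container status" command above is a compact fallback chain: the backtick substitution expands to the crictl path when `which crictl` finds one (otherwise to the literal word crictl, which then fails to execute), and the trailing || runs docker ps -a whenever the first command exits non-zero. Roughly the same logic, unrolled (a sketch; unlike the one-liner, this version does not fall back to docker when crictl itself exits non-zero):

    if command -v crictl >/dev/null 2>&1; then
        sudo "$(command -v crictl)" ps -a
    else
        sudo docker ps -a
    fi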
	I1223 00:02:05.997019  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:02:06.048845  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:06.048961  687772 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
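The apply fails before the manifest ever reaches the server: kubectl's client-side validation first fetches the OpenAPI schema from /openapi/v2, and that request is what gets connection-refused. The --validate=false switch the error message suggests only skips that schema download; with the apiserver down, the apply itself would still fail. For reference, the suggested form (assembled from the command and the hint in the log above):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storageclass.yaml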
	I1223 00:02:08.398283  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:08.409794  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:08.428854  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.428878  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:08.428927  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:08.447223  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.447248  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:08.447292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:08.464797  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.464816  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:08.464857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:08.483364  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.483386  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:08.483449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:08.501985  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.502014  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:08.502064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:08.520995  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.521020  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:08.521071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:08.542089  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.542109  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:08.542149  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:08.560448  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.560468  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:08.560478  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:08.560489  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:08.606044  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:08.606073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:08.626314  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:08.626339  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:08.681455  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:08.681477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:08.681490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:08.699804  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:08.699830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:07.998312  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:10.498076  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:11.230050  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:11.241320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:11.260234  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.260256  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:11.260301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:11.279535  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.279558  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:11.279635  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:11.297775  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.297799  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:11.297843  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:11.315756  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.315780  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:11.315823  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:11.333828  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.333850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:11.333894  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:11.351608  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.351631  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:11.351673  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:11.371003  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.371024  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:11.371065  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:11.388642  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.388662  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:11.388671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:11.388681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:11.406664  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:11.406687  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.434828  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:11.434851  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:11.481940  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:11.481966  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:11.503051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:11.503077  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:11.563912  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:14.064094  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:02:12.997638  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:15.497547  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:15.997757  622784 node_ready.go:38] duration metric: took 6m0.000870759s for node "no-preload-063943" to be "Ready" ...
	I1223 00:02:15.999489  622784 out.go:203] 
	W1223 00:02:16.002745  622784 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1223 00:02:16.002767  622784 out.go:285] * 
	W1223 00:02:16.002971  622784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1223 00:02:16.004060  622784 out.go:203] 
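At this point the no-preload-063943 start (process 622784) gives up: the node never reported Ready within the 6m0s wait, so minikube exits with GUEST_START. Given the connection-refused errors in the readiness polls above, the node is not merely NotReady; its apiserver endpoint is not accepting connections at all. A direct probe (endpoint taken from the log; while the apiserver is down, expect "connection refused" rather than any HTTP status):

    curl -k https://192.168.103.2:8443/api/v1/nodes/no-preload-063943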
	I1223 00:02:14.075388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:14.094051  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.094075  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:14.094123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:14.112428  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.112454  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:14.112511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:14.130910  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.130935  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:14.130991  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:14.149172  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.149194  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:14.149247  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:14.167387  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.167414  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:14.167470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:14.187009  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.187034  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:14.187080  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:14.205514  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.205537  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:14.205604  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:14.223867  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.223893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:14.223906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:14.223919  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:14.278850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:14.278877  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:14.278904  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:14.297791  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:14.297817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:14.329010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:14.329035  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:14.375196  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:14.375228  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:16.895760  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
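The pgrep probe that opens each retry cycle checks whether an apiserver process exists at all: -f matches the pattern against the full command line, -x requires the whole command line to match it, and -n prints only the newest matching PID. An empty result (exit status 1) appears to be what keeps these log-gathering rounds going. The flags, annotated:

    # -f  match the regex against the full command line, not just the process name
    # -x  the whole command line must match the pattern exactly
    # -n  print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'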
	I1223 00:02:16.908501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:16.928330  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.928357  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:16.928403  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:16.947248  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.947272  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:16.947319  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:16.967240  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.967266  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:16.967318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:16.986942  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.986966  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:16.987025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:17.008674  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.008702  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:17.008760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:17.030466  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.030492  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:17.030548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:17.051687  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.051719  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:17.051773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:17.073457  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.073486  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:17.073502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:17.073521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:17.131973  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:17.132010  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:17.157397  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:17.157433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:17.217639  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:17.217669  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:17.217683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:17.239498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:17.239530  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:19.769550  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:19.782360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:19.802423  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.802446  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:19.802497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:19.821183  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.821214  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:19.821269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:19.840343  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.840369  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:19.840426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:19.857810  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.857835  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:19.857878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:19.875458  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.875481  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:19.875523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:19.893840  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.893864  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:19.893916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:19.912030  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.912053  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:19.912094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:19.930049  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.930066  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:19.930077  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:19.930088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:19.976279  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:19.976304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:19.995814  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:19.995837  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:20.054797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:20.054819  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:20.054833  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:20.074562  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:20.074588  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:20.651032  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:02:20.702678  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:20.702795  687772 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1223 00:02:22.602868  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:22.614420  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:22.633871  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.633892  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:22.633942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:22.652376  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.652403  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:22.652454  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:22.670318  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.670340  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:22.670384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:22.688893  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.688913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:22.688966  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:22.707579  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.707614  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:22.707667  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:22.726147  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.726174  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:22.726230  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:22.744895  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.744919  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:22.744975  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:22.765807  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.765834  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:22.765848  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:22.765858  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:22.786075  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:22.786111  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:22.814010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:22.814034  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:22.859717  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:22.859741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:22.878865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:22.878889  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:22.933790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:25.434500  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:25.446396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:25.466157  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.466184  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:25.466237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:25.484799  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.484827  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:25.484899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:25.503442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.503470  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:25.503516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:25.522088  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.522114  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:25.522174  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:25.540899  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.540924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:25.540979  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:25.559853  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.559877  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:25.559929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:25.578537  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.578560  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:25.578619  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:25.597442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.597465  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:25.597476  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:25.597491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:25.617688  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:25.617718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:25.672737  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:25.665784    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.666239    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.667786    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.668269    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.669751    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:25.672761  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:25.672777  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:25.691559  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:25.691585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:25.719893  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:25.719918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.271777  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:28.284248  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:28.304042  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.304069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:28.304126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:28.322682  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.322711  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:28.322769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:28.340899  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.340925  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:28.340974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:28.359896  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.359922  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:28.359976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:28.378627  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.378650  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:28.378700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:28.396793  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.396821  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:28.396870  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:28.415408  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.415434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:28.415480  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:28.434108  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.434131  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:28.434142  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:28.434153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:28.462377  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:28.462405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.509046  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:28.509080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:28.531034  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:28.531065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:28.587866  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:28.580703    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.581207    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.582777    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.583174    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.584683    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:28.587904  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:28.587920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
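The loop above is minikube's control-plane health check: it enumerates the expected components by container name and finds none, which is why every subsequent kubectl call fails. Below is a minimal sketch of the same discovery pass, assuming it runs where the node's Docker daemon is reachable (e.g. inside `minikube ssh`); the k8s_ name prefix is the kubelet/cri-dockerd container-naming convention visible in the log.

    #!/usr/bin/env bash
    # Sketch of the container-discovery loop seen in the log above.
    # Component list and filter syntax are taken directly from the log.
    components=(kube-apiserver etcd coredns kube-scheduler
                kube-proxy kube-controller-manager kindnet kubernetes-dashboard)

    for c in "${components[@]}"; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      if [ -z "$ids" ]; then
        echo "W: no container was found matching \"$c\""
      else
        echo "I: $c -> $ids"
      fi
    done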
	I1223 00:02:31.109730  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:31.121215  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:31.140775  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.140799  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:31.140853  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:31.160694  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.160719  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:31.160766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:31.180064  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.180087  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:31.180133  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:31.198777  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.198802  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:31.198856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:31.217848  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.217875  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:31.217923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:31.237167  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.237196  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:31.237251  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:31.257964  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.257995  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:31.258056  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:31.279556  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.279581  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:31.279607  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:31.279624  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:31.336644  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:31.336664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:31.336675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.355102  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:31.355129  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:31.384063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:31.384096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:31.429299  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:31.429337  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
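Every "describe nodes" attempt above dies on the same symptom: dial tcp [::1]:8443: connect: connection refused, i.e. nothing is listening on the apiserver port at all. A quick probe that isolates that symptom, as a sketch to run inside `minikube ssh` (the /healthz path is the standard apiserver health endpoint, an assumption not taken from this log):

    # curl exits non-zero with "Connection refused" when the port is closed.
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"

    # Cross-check that no kube-apiserver process exists, mirroring the
    # `pgrep -xnf kube-apiserver.*minikube.*` calls in the log:
    sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"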
	I1223 00:02:33.951226  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:33.962558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:33.981280  687772 logs.go:282] 0 containers: []
	W1223 00:02:33.981301  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:33.981353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:34.000326  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.000351  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:34.000417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:34.020043  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.020069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:34.020114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:34.042279  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.042304  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:34.042363  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:34.060550  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.060571  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:34.060631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:34.078917  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.078939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:34.078986  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:34.098151  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.098177  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:34.098224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:34.117100  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.117124  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:34.117137  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:34.117153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:34.138330  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:34.138358  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:34.193562  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:34.193588  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:34.193615  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:34.212264  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:34.212288  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:34.240368  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:34.240399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
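The "Gathering logs for ..." lines show the exact fallback evidence minikube collects when the apiserver is unreachable: kubelet and Docker journals, kernel warnings, and the container list. The same commands bundled for manual triage, as a sketch (the /tmp output paths are illustrative, not part of minikube):

    sudo journalctl -u kubelet -n 400              > /tmp/kubelet.log
    sudo journalctl -u docker -u cri-docker -n 400 > /tmp/docker.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log
    # Container status, preferring crictl and falling back to docker,
    # exactly as the log does:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a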
	I1223 00:02:36.793206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:36.804783  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:36.823535  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.823556  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:36.823618  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:36.841856  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.841879  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:36.841933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:36.860292  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.860319  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:36.860360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:36.878691  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.878719  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:36.878773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:36.897448  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.897472  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:36.897519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:36.916562  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.916585  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:36.916654  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:36.934784  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.934807  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:36.934865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:36.953285  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.953305  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:36.953317  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:36.953328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:37.000978  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:37.001008  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:37.021185  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:37.021217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:37.081314  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:37.074072    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.074651    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076251    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076693    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.078295    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:37.074072    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.074651    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076251    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076693    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.078295    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:37.081345  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:37.081366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:37.100453  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:37.100480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:39.629693  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:39.641060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:39.660163  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.660187  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:39.660232  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:39.680357  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.680379  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:39.680422  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:39.699821  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.699853  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:39.699916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:39.719383  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.719407  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:39.719460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:39.739699  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.739726  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:39.739800  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:39.758766  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.758791  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:39.758849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:39.777656  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.777690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:39.777752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:39.796962  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.796984  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:39.796995  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:39.797006  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:39.842320  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:39.842347  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:39.862054  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:39.862080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:39.916930  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:39.910012    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.910586    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912148    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912544    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.913971    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:39.910012    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.910586    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912148    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912544    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.913971    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:39.916953  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:39.916970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:39.935277  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:39.935306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:40.946301  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:02:41.000005  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:41.000109  687772 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1223 00:02:41.001884  687772 out.go:179] * Enabled addons: 
	I1223 00:02:41.002846  687772 addons.go:530] duration metric: took 1m58.614813363s for enable addons: enabled=[]
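The dashboard apply fails during client-side validation because kubectl cannot download the OpenAPI schema from the dead apiserver; the --validate=false escape hatch suggested in the stderr would only defer the failure, since the apply itself still needs a live API server. A sketch that instead gates the apply on apiserver readiness, reusing the binary and kubeconfig paths from the log (applying the whole addons directory is a simplification; the log applies the dashboard manifests individually):

    KUBECTL=/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
    CFG=/var/lib/minikube/kubeconfig

    # Block until the apiserver answers its health endpoint.
    until sudo KUBECONFIG="$CFG" "$KUBECTL" get --raw /healthz >/dev/null 2>&1; do
      echo "waiting for apiserver on localhost:8443 ..."
      sleep 5
    done
    sudo KUBECONFIG="$CFG" "$KUBECTL" apply --force -f /etc/kubernetes/addons/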
	I1223 00:02:42.463498  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:42.474861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:42.493733  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.493756  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:42.493806  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:42.513344  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.513376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:42.513436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:42.537617  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.537647  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:42.537701  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:42.557673  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.557698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:42.557746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:42.576567  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.576604  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:42.576669  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:42.595813  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.595836  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:42.595890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:42.615074  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.615101  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:42.615154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:42.634655  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.634685  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:42.634702  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:42.634719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:42.654826  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:42.654852  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:42.710552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:42.703396    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.703921    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705447    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705896    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.707365    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:42.703396    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.703921    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705447    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705896    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.707365    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:42.710573  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:42.710585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:42.729412  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:42.729439  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:42.758163  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:42.758187  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.306682  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:45.318226  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:45.337265  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.337287  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:45.337343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:45.355924  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.355945  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:45.355990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:45.374282  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.374303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:45.374348  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:45.394500  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.394533  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:45.394584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:45.412466  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.412489  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:45.412538  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:45.431148  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.431185  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:45.431234  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:45.450281  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.450303  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:45.450352  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:45.468758  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.468787  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:45.468804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:45.468818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.520708  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:45.520742  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:45.542983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:45.543013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:45.598778  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:45.591672    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.592201    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.593887    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.594424    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.595931    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:45.591672    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.592201    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.593887    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.594424    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.595931    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:45.598798  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:45.598812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:45.617903  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:45.617931  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.156370  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:48.167842  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:48.187202  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.187224  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:48.187268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:48.206448  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.206471  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:48.206516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:48.225302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.225322  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:48.225373  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:48.244155  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.244185  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:48.244245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:48.264312  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.264350  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:48.264418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:48.284233  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.284260  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:48.284317  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:48.303899  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.303924  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:48.303973  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:48.324302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.324335  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:48.324350  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:48.324366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:48.345435  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:48.345463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:48.402949  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:48.402972  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:48.402984  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:48.423927  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:48.423954  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.452771  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:48.452799  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
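By this point the same 2-3 second polling cycle has produced identical results for several minutes. A bounded variant of the pgrep check from the log, as a sketch of making the failure explicit rather than looping until the test's outer timeout (the 300s deadline and 3s interval are illustrative choices, not minikube's):

    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver never appeared within 300s" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver is running"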
	I1223 00:02:51.001239  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:51.013175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:51.032822  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.032846  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:51.032898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:51.051652  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.051682  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:51.051724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:51.070373  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.070395  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:51.070448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:51.088655  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.088676  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:51.088732  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:51.108004  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.108025  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:51.108078  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:51.126636  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.126662  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:51.126728  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:51.145355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.145385  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:51.145451  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:51.164355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.164384  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:51.164396  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:51.164409  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:51.191698  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:51.191724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.238383  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:51.238411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:51.260545  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:51.260580  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:51.318147  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:51.310651    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.311192    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.312772    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.313264    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.314877    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
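	Every describe-nodes attempt in this run fails the same way: with no k8s_kube-apiserver container present, nothing is listening on port 8443, so kubectl's requests to https://localhost:8443 are refused. A minimal manual check along the same lines (the /healthz probe is an assumption for illustration, not taken from this log):
	
	# Same call the log runner makes above:
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Hypothetical direct probe; expect "connection refused" while no
	# apiserver container is running:
	curl -k https://localhost:8443/healthz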
	I1223 00:02:51.318168  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:51.318182  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
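	Each polling cycle above repeats the same sequence: wait for a minikube-owned kube-apiserver process, probe Docker for each control-plane container by its k8s_<name> prefix, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal bash sketch of one cycle, assembled only from the commands visible in this log (SSH access to the node is assumed):
	
	#!/bin/bash
	# Look for a kube-apiserver process started by minikube.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	# Probe each expected component container; an empty result corresponds to
	# the "No container was found matching ..." warnings above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	  [ -z "$ids" ] && echo "No container was found matching \"${c}\""
	done
	
	# Log sources gathered on every cycle:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a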
	I1223 00:02:53.838848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:53.850007  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:53.868584  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.868622  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:53.868663  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:53.887617  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.887640  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:53.887687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:53.906384  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.906409  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:53.906453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:53.924912  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.924938  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:53.924988  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:53.943400  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.943425  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:53.943477  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:53.961941  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.961969  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:53.962024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:53.980915  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.980941  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:53.980987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:53.998798  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.998817  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:53.998827  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:53.998839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:54.017064  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:54.017089  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:54.045091  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:54.045114  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:54.090278  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:54.090307  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:54.111890  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:54.111920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:54.166797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:54.159777    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.160366    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.161942    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.162343    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.163852    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.668571  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:56.680147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:56.699018  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.699042  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:56.699093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:56.716996  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.717019  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:56.717068  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:56.735529  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.735565  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:56.735644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:56.756677  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.756701  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:56.756757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:56.777819  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.777850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:56.777905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:56.799967  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.799997  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:56.800054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:56.818811  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.818836  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:56.818881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:56.837426  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.837461  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:56.837473  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:56.837487  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:56.893850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.893879  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:56.893894  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:56.912125  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:56.912151  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:56.939250  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:56.939279  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:56.986566  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:56.986599  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.506330  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:59.518294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:59.540502  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.540529  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:59.540586  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:59.559288  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.559322  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:59.559372  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:59.577919  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.577945  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:59.578002  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:59.596632  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.596655  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:59.596705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:59.614750  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.614775  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:59.614826  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:59.632989  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.633007  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:59.633057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:59.650953  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.650972  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:59.651020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:59.669171  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.669190  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:59.669202  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:59.669214  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:59.713997  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:59.714026  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.733682  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:59.733709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:59.801000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:59.801018  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:59.801029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:59.819988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:59.820018  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.350019  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:02.361484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:02.380765  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.380793  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:02.380841  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:02.398822  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.398847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:02.398892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:02.416468  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.416488  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:02.416530  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:02.435155  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.435182  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:02.435237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:02.453935  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.453961  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:02.454012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:02.472347  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.472376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:02.472445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:02.490480  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.490505  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:02.490562  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:02.510458  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.510485  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:02.510498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:02.510509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.541744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:02.541769  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:02.587578  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:02.587619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:02.607135  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:02.607161  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:02.663082  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:02.663104  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:02.663117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.182740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:05.194033  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:05.212783  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.212809  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:05.212868  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:05.230615  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.230643  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:05.230687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:05.249068  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.249091  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:05.249140  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:05.268884  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.268913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:05.268965  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:05.288077  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.288103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:05.288159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:05.306886  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.306916  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:05.306970  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:05.325552  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.325579  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:05.325644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:05.344222  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.344252  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:05.344264  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:05.344276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:05.389222  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:05.389252  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:05.409357  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:05.409384  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:05.466244  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:05.466269  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:05.466285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.484803  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:05.484830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.013719  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:08.026534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:08.046545  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.046567  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:08.046633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:08.065353  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.065375  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:08.065423  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:08.084081  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.084109  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:08.084156  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:08.102488  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.102514  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:08.102570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:08.121317  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.121347  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:08.121391  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:08.139209  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.139232  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:08.139282  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:08.157445  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.157465  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:08.157510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:08.177073  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.177101  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:08.177115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:08.177131  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:08.195188  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:08.195222  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.223256  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:08.223282  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:08.270668  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:08.270696  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:08.290331  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:08.290355  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:08.344801  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:10.846497  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:10.857798  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:10.876797  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.876818  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:10.876863  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:10.895838  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.895862  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:10.895907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:10.913971  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.913996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:10.914038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:10.932422  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.932449  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:10.932501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:10.951013  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.951034  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:10.951076  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:10.969170  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.969198  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:10.969242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:10.988274  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.988332  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:10.988382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:11.006849  687772 logs.go:282] 0 containers: []
	W1223 00:03:11.006875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:11.006889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:11.006906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:11.059569  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:11.059619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:11.079808  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:11.079835  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:11.134768  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:11.134794  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:11.134817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:11.153181  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:11.153207  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.681510  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:13.692957  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:13.711987  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.712017  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:13.712069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:13.730999  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.731026  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:13.731083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:13.753677  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.753709  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:13.753769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:13.779299  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.779328  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:13.779389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:13.800195  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.800223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:13.800269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:13.818836  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.818861  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:13.818905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:13.837265  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.837293  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:13.837349  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:13.855911  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.855934  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:13.855944  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:13.855963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:13.877413  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:13.877442  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:13.932902  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:13.932922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:13.932935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:13.951430  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:13.951455  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.979434  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:13.979463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.528395  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:16.539658  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:16.558721  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.558746  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:16.558802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:16.577097  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.577122  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:16.577169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:16.594944  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.594973  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:16.595021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:16.612956  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.612982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:16.613028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:16.631601  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.631626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:16.631689  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:16.650054  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.650077  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:16.650125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:16.668847  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.668868  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:16.668912  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:16.686862  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.686892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:16.686906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:16.686923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:16.743145  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:16.743166  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:16.743178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:16.762565  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:16.762607  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:16.794528  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:16.794556  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.840343  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:16.840372  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.362509  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:19.374211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:19.393192  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.393216  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:19.393268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:19.412437  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.412465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:19.412523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:19.432373  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.432401  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:19.432460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:19.452125  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.452159  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:19.452217  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:19.471301  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.471328  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:19.471374  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:19.490544  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.490571  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:19.490643  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:19.510487  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.510508  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:19.510559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:19.529060  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.529084  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:19.529097  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:19.529112  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:19.574443  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:19.574473  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.594488  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:19.594517  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:19.649890  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:19.649910  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:19.649923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:19.668626  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:19.668651  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
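	Before each sweep, every control-plane component is probed by container-name filter; an empty ID list back from docker ps is exactly what produces the repeated "No container was found matching ..." warnings. A sketch of that probe loop, under the same hypothetical runCmd assumption as the earlier sketch:

	    package main

	    import (
	        "fmt"
	        "log"
	        "strings"
	    )

	    // runCmd is the same hypothetical SSH helper as in the earlier sketch.
	    func runCmd(cmd string) (string, error) { return "", nil }

	    func main() {
	        // Component names mirror the k8s_* filters probed in the log above.
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	        }
	        for _, c := range components {
	            out, _ := runCmd(fmt.Sprintf(
	                "docker ps -a --filter=name=k8s_%s --format={{.ID}}", c))
	            if strings.TrimSpace(out) == "" {
	                // An empty ID list is what the log reports as a warning.
	                log.Printf("No container was found matching %q", c)
	            }
	        }
	    }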
	I1223 00:03:22.198480  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:22.210881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:22.230438  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.230462  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:22.230522  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:22.248861  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.248882  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:22.248922  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:22.268466  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.268499  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:22.268557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:22.289199  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.289223  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:22.289268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:22.307380  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.307405  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:22.307470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:22.324678  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.324704  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:22.324763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:22.343704  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.343736  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:22.343791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:22.362087  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.362117  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:22.362137  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:22.362150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:22.409818  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:22.409877  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:22.430134  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:22.430165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:22.485643  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:22.485664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:22.485680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:22.504121  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:22.504150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
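	The describe-nodes step keeps failing for the same root cause as every other probe in this trace: no kube-apiserver container was ever created, so nothing listens on port 8443 and each TCP dial to [::1]:8443 is refused before TLS or authentication even begin. An illustrative Go check (not part of minikube; the address is taken from the log) that distinguishes a closed port from a TLS or auth problem:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Address taken from the log; this check is illustrative only.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            // Matches the log: dial tcp [::1]:8443: connect: connection refused.
	            fmt.Println("apiserver port closed:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("something is listening on 8443")
	    }

	A refused dial points at the missing kube-apiserver container; if the dial succeeded but kubectl still failed, certificates or RBAC would be the next suspects.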
	I1223 00:03:25.031881  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:25.043513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:25.063145  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.063167  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:25.063211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:25.082000  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.082025  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:25.082074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:25.099962  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.099984  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:25.100038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:25.118454  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.118479  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:25.118537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:25.136993  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.137020  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:25.137069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:25.155902  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.155925  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:25.155974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:25.175659  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.175683  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:25.175737  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:25.194139  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.194167  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:25.194180  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:25.194193  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:25.240226  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:25.240258  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:25.261339  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:25.261367  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:25.320736  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:25.320756  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:25.320768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:25.341035  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:25.341064  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
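	Note the cadence: a fresh pgrep poll starts roughly every two to three seconds (00:03:16, :19, :22, :25, :27, ...), consistent with a fixed-interval wait loop around the apiserver check. A minimal sketch of such a loop; apiserverUp, the interval, and the overall deadline are assumptions inferred from the timestamps, not minikube's actual values:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // apiserverUp is a hypothetical predicate wrapping the pgrep probe.
	    func apiserverUp() bool { return false }

	    func main() {
	        // The ~2.5s interval is inferred from the timestamps; the deadline
	        // is an assumption for illustration.
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            if apiserverUp() {
	                fmt.Println("kube-apiserver is running")
	                return
	            }
	            time.Sleep(2500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for kube-apiserver")
	    }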
	I1223 00:03:27.870845  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:27.882071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:27.901298  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.901323  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:27.901382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:27.919859  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.919880  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:27.919930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:27.938496  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.938520  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:27.938563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:27.956888  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.956916  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:27.956972  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:27.975342  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.975362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:27.975412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:27.994015  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.994038  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:27.994082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:28.013037  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.013065  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:28.013125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:28.033210  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.033234  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:28.033247  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:28.033262  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:28.078861  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:28.078892  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:28.098865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:28.098890  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:28.154165  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:28.154185  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:28.154197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:28.172425  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:28.172454  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.702937  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:30.714537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:30.735323  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.735346  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:30.735411  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:30.754342  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.754364  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:30.754416  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:30.773486  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.773513  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:30.773570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:30.792473  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.792498  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:30.792554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:30.810955  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.810981  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:30.811028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:30.829795  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.829816  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:30.829864  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:30.848939  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.848959  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:30.849000  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:30.867397  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.867423  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:30.867435  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:30.867452  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:30.887088  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:30.887116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:30.942084  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:30.942116  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:30.942130  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:30.960703  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:30.960730  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.988334  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:30.988359  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.539710  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:33.551147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:33.569876  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.569899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:33.569943  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:33.588678  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.588710  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:33.588766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:33.607229  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.607251  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:33.607302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:33.625442  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.625466  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:33.625527  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:33.644308  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.644340  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:33.644396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:33.662684  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.662717  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:33.662786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:33.681135  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.681161  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:33.681209  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:33.700016  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.700042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:33.700057  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:33.700070  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:33.718957  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:33.718985  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:33.747390  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:33.747417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.793693  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:33.793722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:33.815051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:33.815076  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:33.869709  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
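	The container-status command is itself a small shell fallback: the backticked which crictl || echo crictl substitutes the crictl path when the binary exists and the bare word crictl otherwise, so on a node without crictl that invocation fails and the || sudo docker ps -a branch runs instead. A sketch that runs the same compound command from Go:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // containerStatusCmd is copied from the log. If crictl is on PATH,
	    // `which crictl` substitutes its path; otherwise the literal word
	    // crictl is substituted, that invocation fails, and the
	    // `|| sudo docker ps -a` branch runs instead.
	    const containerStatusCmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"

	    func main() {
	        out, err := exec.Command("/bin/bash", "-c", containerStatusCmd).CombinedOutput()
	        fmt.Println(string(out), err)
	    }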
	I1223 00:03:36.371365  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:36.383229  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:36.403744  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.403771  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:36.403818  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:36.422087  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.422109  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:36.422163  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:36.440967  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.440989  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:36.441046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:36.459110  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.459137  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:36.459184  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:36.477754  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.477781  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:36.477838  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:36.496775  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.496803  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:36.496857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:36.516542  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.516577  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:36.516652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:36.537692  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.537720  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:36.537731  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:36.537744  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:36.585346  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:36.585376  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:36.605519  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:36.605545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:36.660230  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:36.660253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:36.660269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:36.678368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:36.678395  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
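	Also worth noting: describe nodes runs the node's own version-pinned kubectl (/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl) against the in-VM kubeconfig, so the failure is independent of whatever kubectl or kubeconfig the host has. A sketch that composes that invocation, with the version string taken from the logged binary path:

	    package main

	    import "fmt"

	    func main() {
	        // Version string taken from the binary path in the log; the rest
	        // mirrors the logged describe-nodes invocation.
	        k8sVersion := "v1.35.0-rc.1"
	        fmt.Printf("sudo /var/lib/minikube/binaries/%s/kubectl describe nodes"+
	            " --kubeconfig=/var/lib/minikube/kubeconfig\n", k8sVersion)
	    }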
	I1223 00:03:39.206672  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:39.218123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:39.236299  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.236322  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:39.236384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:39.256168  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.256194  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:39.256256  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:39.278907  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.278934  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:39.278987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:39.299685  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.299712  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:39.299771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:39.319824  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.319847  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:39.319890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:39.339314  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.339340  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:39.339388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:39.357097  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.357122  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:39.357178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:39.375484  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.375506  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:39.375518  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:39.375528  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:39.422143  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:39.422171  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:39.442163  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:39.442190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:39.499251  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:39.499300  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:39.499313  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:39.520555  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:39.520585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:42.050334  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:42.062329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:42.081392  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.081414  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:42.081466  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:42.100032  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.100060  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:42.100108  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:42.118667  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.118701  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:42.118755  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:42.137260  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.137280  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:42.137324  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:42.156202  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.156223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:42.156268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:42.173781  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.173805  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:42.173849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:42.191802  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.191823  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:42.191865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:42.210403  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.210428  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:42.210439  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:42.210451  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:42.257288  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:42.257324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:42.279921  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:42.279950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:42.335965  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:42.335989  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:42.336007  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:42.354691  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:42.354717  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:44.883238  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:44.894443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:44.913117  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.913141  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:44.913198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:44.931401  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.931426  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:44.931481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:44.950195  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.950223  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:44.950276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:44.968485  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.968511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:44.968566  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:44.987148  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.987171  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:44.987233  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:45.005624  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.005646  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:45.005693  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:45.023699  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.023724  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:45.023791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:45.042874  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.042892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:45.042903  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:45.042913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:45.091063  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:45.091090  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:45.111078  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:45.111104  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:45.165637  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
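	Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty k8s_kube-apiserver listing above, since nothing is serving that port yet. A quick hedged check, assuming shell access on the node (ss and curl here are illustrative additions, not commands minikube runs):

	    # Confirm nothing is bound to the apiserver port before suspecting the kubeconfig.
	    sudo ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"
	    # A refused connection here matches the kubectl errors above.
	    curl -ksS --max-time 5 https://localhost:8443/livez || true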
	I1223 00:03:45.165664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:45.165680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:45.183805  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:45.183831  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
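	That completes one full diagnostic pass: a pgrep probe for the apiserver process (-f matches against the full command line, -x requires the whole line to match the pattern, -n picks the newest match), a docker ps name filter per expected control-plane component, and a log sweep. The container checks can be restated as a small loop; this is a sketch only, reusing the same commands the log shows and assuming a shell on the node (for example via minikube ssh):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo docker ps -a --filter=name="k8s_${c}" --format '{{.ID}}')
	      [ -n "$ids" ] && echo "$c: $ids" || echo "no container matching $c"
	    done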
	I1223 00:03:47.712691  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:47.724393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:47.743118  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.743145  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:47.743192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:47.764020  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.764047  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:47.764100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:47.784950  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.784979  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:47.785031  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:47.805130  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.805153  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:47.805202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:47.824818  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.824840  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:47.824881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:47.842122  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.842142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:47.842182  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:47.860107  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.860126  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:47.860169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:47.877957  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.877981  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:47.877991  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:47.878003  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.913554  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:47.913583  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:47.959272  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:47.959301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:47.979197  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:47.979224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:48.034846  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:48.034864  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:48.034876  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
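	The gather step itself is plain shell with fallbacks: `which crictl || echo crictl` resolves crictl if installed, and `crictl ps -a || docker ps -a` drops back to the Docker CLI when crictl is absent or fails; dmesg is trimmed to warning level and above, with the pager and color disabled. Restated standalone (same commands as the log; only --no-pager is an added assumption for non-interactive use):

	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	    sudo journalctl --no-pager -u kubelet -n 400
	    sudo journalctl --no-pager -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400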
	I1223 00:03:50.554653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:50.565766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:50.584506  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.584527  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:50.584568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:50.603087  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.603112  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:50.603159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:50.621694  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.621718  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:50.621758  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:50.640855  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.640882  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:50.640950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:50.658573  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.658615  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:50.658659  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:50.676703  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.676725  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:50.676792  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:50.694997  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.695020  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:50.695084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:50.711361  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.711382  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:50.711393  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:50.711405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:50.739475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:50.739500  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:50.789788  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:50.789828  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:50.810067  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:50.810096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:50.864855  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:50.857771   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.858239   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.859923   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.860349   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.861907   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:50.864881  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:50.864896  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.383457  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:53.394757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:53.414248  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.414277  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:53.414341  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:53.432950  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.432970  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:53.433020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:53.452058  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.452081  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:53.452143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:53.470670  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.470698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:53.470751  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:53.489416  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.489443  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:53.489486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:53.508963  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.508995  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:53.509057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:53.530683  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.530710  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:53.530770  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:53.551545  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.551577  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:53.551610  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:53.551627  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.570296  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:53.570324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:53.598123  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:53.598154  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:53.646248  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:53.646280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:53.666819  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:53.666844  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:53.722068  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:53.715109   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.715646   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717149   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717536   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.719006   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:56.223706  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:56.235187  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:56.255491  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.255511  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:56.255551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:56.274455  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.274479  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:56.274519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:56.293621  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.293648  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:56.293702  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:56.312485  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.312511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:56.312558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:56.331239  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.331266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:56.331320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:56.349793  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.349813  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:56.349856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:56.368378  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.368397  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:56.368446  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:56.386706  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.386730  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:56.386744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:56.386759  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:56.435036  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:56.435067  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:56.456766  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:56.456793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:56.515022  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:56.506534   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.507203   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.508885   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.509323   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.510920   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:56.515044  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:56.515056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:56.537382  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:56.537424  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.067413  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:59.078926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:59.098458  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.098490  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:59.098543  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:59.119074  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.119100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:59.119146  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:59.138014  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.138036  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:59.138082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:59.157367  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.157390  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:59.157433  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:59.175923  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.175950  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:59.176008  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:59.194211  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.194243  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:59.194295  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:59.212980  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.213004  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:59.213050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:59.231233  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.231255  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:59.231266  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:59.231277  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.260354  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:59.260377  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:59.307751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:59.307784  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:59.327756  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:59.327782  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:59.382873  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:59.375811   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.376331   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.377895   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.378317   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.379902   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:59.382895  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:59.382908  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:01.903304  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:01.914514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:01.933300  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.933328  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:01.933388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:01.952153  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.952181  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:01.952225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:01.970903  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.970933  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:01.970987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:01.989493  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.989513  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:01.989567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:02.009114  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.009141  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:02.009198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:02.030277  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.030310  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:02.030365  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:02.050466  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.050492  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:02.050551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:02.069917  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.069941  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:02.069956  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:02.069970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:02.115721  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:02.115750  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:02.135348  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:02.135373  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:02.190691  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:02.183688   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.184205   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.185799   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.186209   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.187682   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:02.190712  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:02.190724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:02.209097  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:02.209122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:04.737357  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:04.748553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:04.770341  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.770369  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:04.770424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:04.791137  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.791165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:04.791214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:04.810520  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.810541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:04.810607  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:04.828972  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.829000  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:04.829055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:04.849074  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.849096  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:04.849148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:04.868041  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.868063  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:04.868115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:04.886481  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.886504  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:04.886567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:04.905235  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.905262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:04.905274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:04.905285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:04.953851  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:04.953880  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:04.973781  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:04.973806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:05.031345  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:05.031368  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:05.031383  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:05.050812  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:05.050839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:07.580204  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:07.592091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:07.611238  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.611267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:07.611318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:07.630713  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.630736  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:07.630786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:07.649511  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.649541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:07.649620  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:07.668236  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.668264  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:07.668323  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:07.687077  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.687101  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:07.687158  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:07.705952  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.705982  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:07.706036  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:07.725156  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.725178  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:07.725224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:07.744024  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.744049  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:07.744063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:07.744079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:07.797680  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:07.797721  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:07.819453  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:07.819481  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:07.875026  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:07.875046  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:07.875059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:07.893942  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:07.893968  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:10.422234  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:10.433749  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:10.453027  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.453049  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:10.453099  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:10.471766  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.471789  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:10.471840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:10.489960  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.489981  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:10.490025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:10.508537  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.508558  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:10.508614  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:10.527336  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.527362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:10.527418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:10.545995  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.546019  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:10.546074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:10.564167  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.564196  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:10.564254  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:10.582919  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.582947  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:10.582961  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:10.582974  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:10.630969  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:10.631004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:10.651161  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:10.651197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:10.709000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:10.709026  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:10.709041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:10.728175  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:10.728203  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
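	The timestamps show the cadence: one full pass roughly every 2.5 to 2.8 seconds (00:04:04 to 00:04:07 to 00:04:10 to 00:04:13), each pass starting over from the pgrep probe. The actual loop and its timeout live in minikube's Go code; the following is only an illustration of the observed behavior, not that implementation:

	    # Illustration of the observed retry cadence, assuming a shell on the node.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 2.5   # matches the ~2.5 s spacing between passes in the log
	    done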
	I1223 00:04:13.258812  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:13.271437  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:13.293437  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.293468  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:13.293525  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:13.313483  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.313508  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:13.313568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:13.333612  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.333643  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:13.333709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:13.353086  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.353111  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:13.353169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:13.372208  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.372230  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:13.372275  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:13.391431  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.391457  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:13.391507  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:13.410402  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.410434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:13.410502  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:13.428653  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.428675  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:13.428687  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:13.428709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:13.474690  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:13.474729  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:13.495426  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:13.495457  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:13.550790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:13.550810  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:13.550822  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:13.569370  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:13.569397  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.099133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:16.110484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:16.129712  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.129743  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:16.129808  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:16.147785  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.147808  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:16.147854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:16.167259  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.167284  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:16.167333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:16.186151  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.186178  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:16.186223  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:16.206074  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.206099  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:16.206154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:16.225296  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.225319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:16.225369  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:16.244091  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.244115  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:16.244160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:16.263620  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.263643  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:16.263655  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:16.263667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:16.323241  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:16.323265  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:16.323281  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:16.342320  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:16.342346  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.371156  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:16.371183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:16.421158  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:16.421188  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:18.942795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:18.954257  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:18.974190  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.974217  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:18.974270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:18.993178  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.993200  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:18.993245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:19.013377  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.013405  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:19.013465  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:19.034917  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.034941  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:19.034990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:19.054247  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.054271  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:19.054326  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:19.072206  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.072235  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:19.072297  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:19.091855  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.091882  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:19.091933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:19.111067  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.111100  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:19.111114  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:19.111127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:19.161923  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:19.161955  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:19.182679  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:19.182708  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:19.239475  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:19.239503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:19.239521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:19.259046  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:19.259075  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:21.799246  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:21.810742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:21.830826  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.830852  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:21.830896  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:21.849427  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.849455  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:21.849501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:21.867823  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.867847  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:21.867891  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:21.886431  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.886452  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:21.886508  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:21.905079  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.905103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:21.905160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:21.923344  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.923365  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:21.923407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:21.941945  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.941966  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:21.942012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:21.959749  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.959773  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:21.959785  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:21.959795  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:21.979750  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:21.979776  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:22.008278  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:22.008301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:22.059988  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:22.060022  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:22.080174  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:22.080201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:22.135625  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:24.636526  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:24.647769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:24.666800  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.666823  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:24.666873  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:24.685078  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.685100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:24.685153  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:24.703219  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.703238  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:24.703287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:24.721619  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.721647  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:24.721705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:24.740548  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.740570  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:24.740632  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:24.758544  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.758568  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:24.758633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:24.776285  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.776317  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:24.776445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:24.794360  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.794386  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:24.794399  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:24.794413  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:24.840111  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:24.840142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:24.860260  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:24.860286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:24.915702  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:24.915723  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:24.915736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:24.934368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:24.934394  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.463653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:27.474997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:27.494098  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.494127  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:27.494183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:27.513771  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.513799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:27.513855  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:27.534688  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.534720  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:27.534777  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:27.553043  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.553065  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:27.553115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:27.571979  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.572005  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:27.572049  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:27.590357  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.590376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:27.590419  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:27.609465  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.609490  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:27.609547  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:27.628214  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.628238  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:27.628253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:27.628267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:27.646519  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:27.646545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.674935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:27.674958  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:27.721277  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:27.721306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:27.741140  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:27.741165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:27.796676  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:30.297779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:30.308987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:30.327806  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.327827  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:30.327885  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:30.347142  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.347165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:30.347216  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:30.365629  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.365656  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:30.365729  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:30.383470  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.383496  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:30.383552  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:30.402127  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.402152  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:30.402214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:30.420681  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.420706  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:30.420757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:30.439453  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.439475  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:30.439517  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:30.458669  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.458691  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:30.458702  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:30.458713  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:30.505022  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:30.505050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:30.528295  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:30.528323  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:30.585055  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:30.585076  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:30.585088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:30.604200  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:30.604229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:33.131779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:33.143670  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:33.163179  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.163200  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:33.163245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:33.182970  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.182992  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:33.183043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:33.201569  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.201609  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:33.201656  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:33.219907  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.219931  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:33.219989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:33.239604  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.239630  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:33.239675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:33.258182  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.258211  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:33.258263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:33.277606  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.277632  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:33.277678  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:33.297258  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.297283  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:33.297296  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:33.297312  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:33.344903  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:33.344932  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:33.364742  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:33.364768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:33.420528  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:33.420549  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:33.420560  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:33.439384  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:33.439411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:35.968903  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:35.980276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:35.999444  687772 logs.go:282] 0 containers: []
	W1223 00:04:35.999474  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:35.999534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:36.018792  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.018819  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:36.018880  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:36.036956  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.036985  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:36.037043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:36.055239  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.055265  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:36.055315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:36.073241  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.073272  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:36.073325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:36.091575  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.091613  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:36.091662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:36.110369  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.110396  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:36.110448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:36.128481  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.128505  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:36.128516  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:36.128526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:36.176492  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:36.176526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:36.196649  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:36.196675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:36.253201  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:36.253224  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:36.253241  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:36.273351  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:36.273379  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.804411  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:38.815899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:38.834644  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.834668  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:38.834713  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:38.853892  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.853919  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:38.853967  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:38.871484  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.871505  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:38.871554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:38.889803  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.889828  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:38.889879  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:38.909558  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.909586  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:38.909652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:38.929528  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.929553  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:38.929624  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:38.948153  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.948181  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:38.948241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:38.966657  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.966679  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:38.966689  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:38.966711  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.994610  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:38.994637  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:39.040694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:39.040722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:39.060391  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:39.060417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:39.116169  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:39.116189  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:39.116201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.638009  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:41.650427  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:41.670214  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.670241  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:41.670289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:41.689539  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.689568  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:41.689651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:41.708449  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.708472  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:41.708520  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:41.727897  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.727918  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:41.727963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:41.748169  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.748200  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:41.748252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:41.767148  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.767172  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:41.767224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:41.789562  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.789589  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:41.789665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:41.808259  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.808281  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:41.808292  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:41.808304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.827093  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:41.827120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:41.854644  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:41.854671  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:41.901960  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:41.901995  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:41.921983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:41.922011  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:41.978723  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
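Each iteration also probes the same eight container names with docker ps -a. Below is a minimal sketch of that scan; the component list and the name=k8s_ filter format are taken from the Run: lines above, and the surrounding program is hypothetical, not minikube's logs.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("docker ps failed for %q: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            // With no control plane running, every list comes back empty,
            // matching the `0 containers: []` lines in the log.
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }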
	I1223 00:04:44.479583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:44.491055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:44.513749  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.513779  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:44.513836  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:44.535619  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.535648  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:44.535722  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:44.555441  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.555464  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:44.555512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:44.574828  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.574851  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:44.574895  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:44.593270  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.593293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:44.593350  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:44.612157  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.612182  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:44.612239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:44.630342  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.630366  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:44.630417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:44.648864  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.648893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:44.648905  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:44.648917  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:44.698462  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:44.698494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:44.718432  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:44.718463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:44.777738  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:44.777764  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:44.777781  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:44.798488  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:44.798522  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:47.328787  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:47.340091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:47.359764  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.359786  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:47.359834  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:47.378531  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.378557  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:47.378633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:47.397279  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.397303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:47.397351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:47.415379  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.415404  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:47.415449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:47.433342  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.433363  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:47.433407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:47.452134  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.452153  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:47.452195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:47.470492  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.470514  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:47.470565  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:47.489435  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.489462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:47.489475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:47.489490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:47.543310  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:47.543341  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:47.563678  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:47.563716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:47.618877  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:47.611492   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.612043   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.613658   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.614136   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.615686   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:47.611492   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.612043   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.613658   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.614136   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.615686   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:47.618902  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:47.618916  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:47.637117  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:47.637142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:50.165288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:50.176485  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:50.195504  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.195530  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:50.195573  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:50.214411  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.214435  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:50.214486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:50.232050  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.232073  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:50.232113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:50.249723  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.249747  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:50.249805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:50.269197  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.269220  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:50.269262  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:50.287018  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.287042  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:50.287084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:50.304852  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.304876  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:50.304923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:50.323126  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.323150  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:50.323164  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:50.323177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:50.371303  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:50.371328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:50.391396  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:50.391419  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:50.446479  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:50.439351   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.440054   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.441636   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.442091   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.443655   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:50.439351   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.440054   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.441636   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.442091   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.443655   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:50.446503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:50.446519  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:50.466869  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:50.466895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
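The "container status" gather is a shell fallback: prefer crictl when it is installed, otherwise fall back to docker ps -a. A sketch that runs the same bash -c line (copied verbatim from the log) and assumes nothing beyond passwordless sudo on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Verbatim from the Run: lines above: use crictl when present,
        // fall back to docker when it is not.
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Printf("container status gather failed: %v\n", err)
        }
        fmt.Print(string(out))
    }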
	I1223 00:04:53.004783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:53.016488  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:53.037102  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.037130  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:53.037175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:53.056487  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.056509  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:53.056551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:53.074919  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.074938  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:53.074983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:53.093142  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.093163  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:53.093203  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:53.112007  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.112030  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:53.112079  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:53.130737  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.130759  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:53.130802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:53.149980  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.150009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:53.150057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:53.167468  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.167493  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:53.167503  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:53.167513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.195775  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:53.195800  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:53.243212  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:53.243238  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:53.263047  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:53.263073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:53.319009  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:53.311761   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.312309   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.313970   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.314449   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.315934   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:53.311761   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.312309   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.313970   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.314449   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.315934   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:53.319029  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:53.319041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:55.838963  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:55.850169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:55.868811  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.868833  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:55.868878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:55.887281  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.887309  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:55.887361  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:55.905343  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.905372  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:55.905425  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:55.922787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.922811  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:55.922858  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:55.941063  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.941090  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:55.941143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:55.960388  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.960413  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:55.960549  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:55.978787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.978810  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:55.978854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:55.996489  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.996516  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:55.996530  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:55.996542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:56.048197  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:56.048229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:56.068640  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:56.068668  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:56.124436  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:56.117357   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.117926   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119480   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119940   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.121489   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:56.117357   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.117926   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119480   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119940   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.121489   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:56.124461  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:56.124478  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:56.143079  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:56.143102  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.672032  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:58.683539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:58.702739  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.702762  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:58.702814  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:58.721434  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.721465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:58.721514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:58.741740  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.741768  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:58.741811  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:58.760960  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.760982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:58.761035  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:58.780979  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.781001  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:58.781045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:58.799417  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.799453  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:58.799501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:58.817985  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.818007  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:58.818051  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:58.837633  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.837659  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:58.837671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:58.837683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:58.856421  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:58.856448  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.883550  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:58.883574  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:58.932130  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:58.932158  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:58.953160  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:58.953189  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:59.009951  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:59.002105   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.002736   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004318   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004809   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.006430   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:59.002105   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.002736   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004318   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004809   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.006430   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
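The recurring "failed describe nodes" block is simply kubectl exiting with status 1 because nothing is listening on localhost:8443. A minimal reproduction follows; the binary and kubeconfig paths are copied from the log, while the error handling is illustrative rather than minikube's own.

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // With the apiserver down this is "Process exited with status 1",
            // and stderr carries the "connection refused" lines seen above.
            fmt.Printf("failed describe nodes: %v\nstderr:\n%s", err, stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }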
	I1223 00:05:01.512529  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:01.523921  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:01.542499  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.542525  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:01.542569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:01.560824  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.560850  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:01.560892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:01.578994  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.579017  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:01.579060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:01.597267  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.597293  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:01.597346  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:01.615860  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.615880  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:01.615919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:01.635022  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.635045  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:01.635084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:01.654257  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.654282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:01.654338  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:01.672470  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.672492  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:01.672502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:01.672513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:01.720496  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:01.720525  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:01.740698  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:01.740724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:01.800538  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:01.800562  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:01.800579  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:01.820265  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:01.820291  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.348938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:04.360190  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:04.379095  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.379124  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:04.379177  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:04.396991  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.397012  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:04.397057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:04.415658  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.415682  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:04.415750  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:04.434023  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.434049  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:04.434093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:04.452721  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.452744  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:04.452791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:04.471221  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.471247  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:04.471294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:04.489656  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.489685  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:04.489734  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:04.508637  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.508669  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:04.508689  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:04.508702  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:04.526928  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:04.526953  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.553896  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:04.553923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:04.602972  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:04.602999  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:04.622788  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:04.622812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:04.678232  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
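The describe-nodes failure is the telling one: the version-matched kubectl under /var/lib/minikube/binaries/v1.35.0-rc.1 is run against the node-local kubeconfig, which targets localhost:8443, and every attempt is refused on [::1]:8443. Together with the empty k8s_kube-apiserver probe, this means nothing is listening on the apiserver port at all, not merely that the apiserver is unhealthy. Between rounds minikube re-probes for the process with pgrep -xnf kube-apiserver.*minikube.* (-f match against the full command line, -x require the whole line to match, -n keep only the newest match). A manual spot check from the host could look like this (a sketch; PROFILE is a hypothetical placeholder, since this excerpt never names the profile under test):

	PROFILE="my-profile"   # hypothetical; pick the real name from 'minikube profile list'
	# is anything listening on the apiserver port inside the node?
	minikube -p "$PROFILE" ssh -- sudo ss -ltnp 'sport = :8443'
	# the same describe-nodes call the log runs, with the node-local kubeconfig
	minikube -p "$PROFILE" ssh -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	  describe nodes --kubeconfig=/var/lib/minikube/kubeconfig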
	I1223 00:05:07.179923  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:07.191963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:07.211239  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.211263  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:07.211304  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:07.230281  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.230302  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:07.230343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:07.249365  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.249391  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:07.249443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:07.269410  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.269431  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:07.269484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:07.288681  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.288711  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:07.288756  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:07.307722  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.307742  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:07.307785  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:07.324479  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.324503  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:07.324557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:07.343010  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.343030  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:07.343041  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:07.343056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:07.370090  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:07.370116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:07.416268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:07.416294  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:07.436063  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:07.436088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:07.492624  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:07.492650  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:07.492667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.011735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:10.025412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:10.046816  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.046848  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:10.046917  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:10.065664  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.065693  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:10.065752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:10.084486  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.084512  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:10.084569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:10.103489  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.103510  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:10.103563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:10.121383  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.121413  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:10.121457  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:10.139817  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.139840  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:10.139883  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:10.158123  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.158142  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:10.158195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:10.176690  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.176714  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:10.176728  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:10.176743  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:10.221786  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:10.221818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:10.241642  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:10.241670  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:10.306092  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:10.306110  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:10.306122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.325227  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:10.325254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:12.853199  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:12.864559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:12.883528  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.883553  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:12.883615  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:12.901914  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.901946  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:12.902003  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:12.920676  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.920703  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:12.920746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:12.938812  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.938840  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:12.938898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:12.956564  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.956588  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:12.956651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:12.975030  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.975056  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:12.975112  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:12.992748  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.992770  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:12.992819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:13.013710  687772 logs.go:282] 0 containers: []
	W1223 00:05:13.013733  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:13.013744  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:13.013756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:13.044889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:13.044920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:13.090565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:13.090611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:13.110578  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:13.110614  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:13.166048  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:13.166066  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:13.166079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.685941  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:15.697434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:15.716560  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.716607  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:15.716664  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:15.735775  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.735799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:15.735847  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:15.753974  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.753996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:15.754046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:15.771763  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.771788  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:15.771846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:15.790222  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.790249  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:15.790294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:15.808671  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.808691  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:15.808735  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:15.827295  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.827324  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:15.827377  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:15.845637  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.845658  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:15.845668  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:15.845679  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:15.892975  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:15.893004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:15.912599  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:15.912626  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:15.967763  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:15.967788  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:15.967801  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.986603  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:15.986632  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:18.516732  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:18.529415  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:18.549048  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.549069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:18.549113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:18.567672  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.567705  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:18.567771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:18.586513  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.586538  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:18.586613  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:18.604518  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.604538  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:18.604579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:18.623446  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.623467  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:18.623510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:18.642213  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.642230  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:18.642279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:18.660501  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.660521  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:18.660563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:18.678846  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.678869  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:18.678882  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:18.678893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:18.727936  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:18.727965  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:18.749033  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:18.749059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:18.804351  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:18.804386  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:18.804401  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:18.822650  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:18.822681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:21.351938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:21.363094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:21.382091  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.382123  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:21.382179  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:21.400790  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.400813  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:21.400861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:21.418989  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.419014  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:21.419060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:21.437814  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.437839  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:21.437898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:21.456967  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.456991  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:21.457045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:21.475541  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.475566  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:21.475644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:21.494493  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.494518  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:21.494576  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:21.513952  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.513979  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:21.513990  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:21.514001  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:21.563253  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:21.563283  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:21.583663  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:21.583693  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:21.638754  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:21.638774  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:21.638786  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:21.657674  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:21.657704  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:24.188905  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:24.200277  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:24.220108  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.220133  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:24.220188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:24.240286  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.240307  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:24.240351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:24.260644  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.260670  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:24.260724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:24.282918  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.282943  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:24.282990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:24.302929  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.302956  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:24.303013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:24.322124  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.322145  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:24.322196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:24.340965  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.340993  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:24.341050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:24.360121  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.360148  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:24.360162  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:24.360177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:24.406776  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:24.406809  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:24.428882  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:24.428909  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:24.484257  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:24.484286  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:24.484304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:24.504724  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:24.504752  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.038561  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:27.050259  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:27.069265  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.069288  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:27.069333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:27.088081  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.088108  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:27.088171  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:27.107172  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.107198  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:27.107246  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:27.125773  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.125804  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:27.125862  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:27.144259  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.144282  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:27.144339  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:27.163197  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.163217  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:27.163263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:27.181942  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.181971  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:27.182030  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:27.199936  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.199964  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:27.199980  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:27.199996  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:27.218431  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:27.218456  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.246756  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:27.246783  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:27.297557  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:27.297603  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:27.318177  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:27.318205  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:27.374968  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:29.875712  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:29.887100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:29.906809  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.906834  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:29.906892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:29.926388  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.926414  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:29.926467  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:29.946220  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.946248  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:29.946302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:29.967102  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.967131  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:29.967188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:29.986540  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.986564  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:29.986631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:30.004809  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.004835  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:30.004881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:30.023625  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.023655  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:30.023711  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:30.042067  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.042089  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:30.042100  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:30.042120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:30.061885  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:30.061913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:30.090401  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:30.090432  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:30.138962  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:30.138993  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:30.159224  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:30.159250  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:30.216295  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
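By this point the pattern is fixed: each round spends roughly half a second probing and gathering logs, then about 2.5 seconds pass before the next pgrep (00:05:04, 00:05:07, 00:05:10, ... 00:05:32). That is the shape of a poll-until-healthy loop, which keeps cycling until minikube's apiserver wait times out. As a standalone sketch (the 2.5 s interval is read off the timestamps above, not a documented minikube constant):

	# poll until an apiserver process for this cluster appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	  sleep 2.5
	done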
	I1223 00:05:32.716974  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:32.728432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:32.748217  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.748245  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:32.748292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:32.767866  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.767887  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:32.767935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:32.788690  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.788723  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:32.788782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:32.808366  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.808397  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:32.808460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:32.827631  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.827655  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:32.827714  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:32.846429  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.846456  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:32.846511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:32.865177  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.865202  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:32.865258  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:32.885235  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.885258  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:32.885268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:32.885280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:32.905218  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:32.905245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:32.960860  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:32.960885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:32.960905  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:32.979917  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:32.979943  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:33.008187  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:33.008218  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
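Each cycle above opens by checking, component by component, whether any container exists for the control plane, via docker ps -a --filter name=k8s_<component> --format {{.ID}}. A standalone sketch of that check (assuming a local docker CLI; inside the node the same command goes through the SSH runner seen in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component list mirrors the filters in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c,
			"--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %q: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// Matches the warnings in the log above.
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}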
	I1223 00:05:35.555359  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:35.566888  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:35.586562  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.586588  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:35.586657  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:35.605495  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.605522  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:35.605579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:35.624671  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.624700  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:35.624760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:35.643198  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.643222  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:35.643278  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:35.662223  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.662245  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:35.662290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:35.681991  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.682016  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:35.682071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:35.700985  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.701009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:35.701062  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:35.719976  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.720000  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:35.720015  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:35.720029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.767694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:35.767728  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:35.792896  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:35.792935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:35.849448  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:35.849470  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:35.849491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:35.868248  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:35.868274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
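After the container checks, every cycle fans out over the same five log sources. The command strings in the sketch below are copied from the log; the point it illustrates is that a failing collector (here "describe nodes", which exits 1 while the apiserver is down) is reported and skipped without aborting the remaining collectors:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// e.g. "Process exited with status 1" from kubectl while
			// the apiserver is unreachable; the loop keeps going.
			fmt.Printf("failed %s: %v\n%s\n", s.name, err, out)
			continue
		}
		_ = out // the harness stores this output in the report
	}
}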
	I1223 00:05:38.397175  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:38.408856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:38.428054  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.428085  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:38.428141  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:38.447350  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.447376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:38.447428  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:38.466426  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.466455  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:38.466512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:38.486074  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.486104  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:38.486173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:38.505584  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.505626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:38.505709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:38.527387  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.527416  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:38.527473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:38.547928  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.547955  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:38.548015  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:38.568237  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.568262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:38.568274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:38.568285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:38.616522  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:38.616555  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:38.638676  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:38.638707  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:38.694984  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:38.695006  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:38.695019  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:38.713940  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:38.713969  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:41.244859  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:41.256283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:41.275201  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.275233  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:41.275280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:41.295272  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.295299  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:41.295353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:41.313039  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.313069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:41.313135  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:41.331394  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.331418  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:41.331491  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:41.350556  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.350583  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:41.350650  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:41.369215  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.369242  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:41.369290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:41.387799  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.387826  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:41.387877  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:41.406760  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.406785  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:41.406799  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:41.406813  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:41.453518  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:41.453548  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:41.473671  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:41.473700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:41.531098  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:41.531124  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:41.531139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:41.551968  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:41.551997  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:44.081115  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:44.092382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:44.111299  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.111326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:44.111381  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:44.130168  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.130196  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:44.130250  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:44.149028  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.149052  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:44.149109  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:44.167326  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.167346  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:44.167388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:44.185875  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.185898  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:44.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:44.205297  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.205320  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:44.205370  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:44.224561  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.224608  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:44.224661  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:44.242760  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.242782  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:44.242795  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:44.242808  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:44.290363  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:44.290399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:44.310780  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:44.310806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:44.367913  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:44.367931  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:44.367945  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:44.387052  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:44.387080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:46.916305  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:46.927926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:46.946856  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.946882  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:46.946941  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:46.965651  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.965674  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:46.965720  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:46.984835  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.984863  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:46.984920  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:47.005005  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.005033  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:47.005095  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:47.026916  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.026948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:47.026996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:47.047971  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.048003  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:47.048064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:47.067344  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.067372  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:47.067424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:47.087055  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.087079  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:47.087093  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:47.087107  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:47.134052  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:47.134085  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:47.154446  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:47.154479  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:47.210710  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:47.210734  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:47.210746  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:47.230988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:47.231017  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:49.759465  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:49.771325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:49.791131  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.791160  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:49.791219  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:49.810792  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.810814  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:49.810859  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:49.829432  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.829454  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:49.829499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:49.847527  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.847548  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:49.847603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:49.866252  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.866275  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:49.866315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:49.885934  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.885955  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:49.885996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:49.903668  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.903690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:49.903733  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:49.923276  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.923298  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:49.923309  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:49.923320  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:49.968185  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:49.968217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:49.988993  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:49.989021  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:50.052060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:50.052083  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:50.052100  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:50.070860  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:50.070885  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:52.599679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:52.611289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:52.629699  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.629724  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:52.629782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:52.648660  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.648689  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:52.648740  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:52.667204  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.667232  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:52.667287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:52.685635  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.685667  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:52.685718  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:52.703669  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.703692  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:52.703742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:52.721467  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.721495  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:52.721553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:52.739858  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.739885  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:52.739930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:52.759123  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.759151  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:52.759165  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:52.759178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:52.812520  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:52.812552  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:52.832551  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:52.832578  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:52.887680  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:52.887700  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:52.887719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:52.906246  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:52.906276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.444344  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:55.455763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:55.475305  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.475332  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:55.475389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:55.494094  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.494117  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:55.494164  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:55.511874  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.511896  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:55.511942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:55.530088  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.530113  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:55.530159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:55.548749  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.548778  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:55.548828  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:55.567179  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.567204  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:55.567269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:55.586315  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.586343  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:55.586395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:55.605282  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.605303  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:55.605314  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:55.605327  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:55.624085  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:55.624113  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.652038  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:55.652065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:55.699247  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:55.699274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:55.719031  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:55.719058  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:55.777078  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:58.278708  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:58.291024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:58.310944  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.310971  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:58.311027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:58.329419  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.329443  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:58.329499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:58.346556  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.346579  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:58.346653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:58.364565  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.364601  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:58.364653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:58.383020  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.383043  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:58.383089  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:58.401354  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.401381  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:58.401440  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:58.419356  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.419377  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:58.419426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:58.438428  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.438449  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:58.438461  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:58.438477  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:58.458325  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:58.458353  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:58.513127  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
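The describe-nodes gather fails for the same underlying reason as the container probes: kubectl is pointed (via /var/lib/minikube/kubeconfig) at https://localhost:8443, and nothing is listening there because kube-apiserver never started, so every connection attempt is refused. A minimal way to confirm that precondition on the node, reusing the same check the log itself runs (a sketch, not part of the test):

    # exits non-zero and prints the fallback message when no apiserver
    # process exists -- exactly the state this log keeps observing
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo 'no kube-apiserver process; expect connection refused on :8443'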
	I1223 00:05:58.513156  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:58.513173  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:58.532159  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:58.532183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
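The container-status gather prefers crictl and falls back to plain docker if crictl is missing or fails. Spelled out, the one-liner behaves roughly like this paraphrase (not new tooling, just the same fallback made explicit):

    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a || sudo docker ps -a
    else
      sudo docker ps -a   # crictl not on PATH
    fi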
	I1223 00:05:58.559409  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:58.559433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
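This whole gather cycle sits inside a polling loop: minikube re-runs the pgrep check every few seconds and, as long as no apiserver process appears, re-collects the same diagnostics, so the near-identical blocks that follow are successive iterations of one wait, not separate failures. Schematically (an illustrative reconstruction; the real loop is Go code, gather_diagnostics is a placeholder, and the interval is inferred from the roughly 2.5-3 s spacing of the timestamps):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3              # assumed interval, matching the log's cadence
      gather_diagnostics   # placeholder for the probes and log gathers above
    done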
	I1223 00:06:01.105933  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:01.117378  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:01.136395  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.136418  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:01.136463  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:01.155037  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.155063  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:01.155111  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:01.173939  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.173960  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:01.174004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:01.193250  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.193271  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:01.193312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:01.210927  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.210948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:01.210990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:01.229293  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.229319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:01.229367  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:01.247971  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.247997  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:01.248059  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:01.267642  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.267667  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:01.267688  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:01.267718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:01.290552  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:01.290581  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:01.346096  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:01.346115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:01.346127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:01.364490  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:01.364516  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:01.391895  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:01.391918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:03.938979  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:03.950393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:03.969334  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.969364  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:03.969448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:03.988183  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.988205  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:03.988252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:04.007742  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.007767  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:04.007821  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:04.027502  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.027528  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:04.027582  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:04.048194  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.048222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:04.048286  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:04.067020  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.067044  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:04.067096  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:04.085747  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.085776  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:04.085829  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:04.103906  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.103936  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:04.103950  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:04.103963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:04.131404  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:04.131427  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:04.178862  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:04.178893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:04.198797  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:04.198823  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:04.255150  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:04.255174  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:04.255190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:06.777149  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:06.788444  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:06.807818  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.807839  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:06.807881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:06.827018  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.827044  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:06.827092  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:06.845320  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.845342  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:06.845395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:06.862837  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.862856  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:06.862907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:06.880629  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.880649  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:06.880690  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:06.898665  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.898694  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:06.898762  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:06.916571  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.916606  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:06.916662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:06.934190  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.934213  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:06.934228  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:06.934245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:06.961869  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:06.961895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:07.008426  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:07.008460  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:07.033602  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:07.033641  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:07.089432  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:07.089452  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:07.089463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.608089  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:09.619510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:09.638402  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.638426  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:09.638473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:09.657218  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.657247  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:09.657292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:09.675838  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.675871  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:09.675935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:09.694913  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.694939  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:09.694992  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:09.714024  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.714046  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:09.714097  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:09.733120  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.733142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:09.733188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:09.752081  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.752104  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:09.752148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:09.770630  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.770661  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:09.770676  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:09.770700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:09.818931  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:09.818967  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:09.839282  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:09.839309  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:09.895206  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:09.895234  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:09.895247  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.913965  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:09.913994  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.442178  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:12.453355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:12.472243  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.472267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:12.472312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:12.491113  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.491136  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:12.491192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:12.511291  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.511317  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:12.511376  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:12.532112  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.532141  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:12.532196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:12.551226  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.551250  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:12.551293  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:12.569426  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.569449  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:12.569504  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:12.588494  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.588520  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:12.588569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:12.606610  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.606644  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:12.606657  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:12.606674  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.634113  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:12.634143  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:12.681112  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:12.681140  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:12.700711  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:12.700736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:12.757239  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:12.757259  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:12.757273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.278124  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:15.290283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:15.309406  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.309433  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:15.309481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:15.328093  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.328119  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:15.328173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:15.346922  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.346949  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:15.347006  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:15.364932  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.364960  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:15.365013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:15.383120  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.383144  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:15.383188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:15.401332  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.401355  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:15.401404  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:15.419961  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.419986  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:15.420037  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:15.438746  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.438769  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:15.438780  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:15.438793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:15.486016  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:15.486044  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:15.506911  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:15.506939  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:15.566808  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:15.566826  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:15.566836  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.586013  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:15.586040  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.115753  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:18.127221  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:18.146018  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.146048  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:18.146094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:18.165274  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.165294  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:18.165337  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:18.183880  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.183904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:18.183947  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:18.202061  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.202082  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:18.202130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:18.219858  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.219892  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:18.219945  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:18.238966  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.238987  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:18.239032  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:18.260921  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.260949  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:18.260997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:18.280705  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.280735  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:18.280750  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:18.280764  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:18.299732  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:18.299756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.327603  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:18.327631  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:18.375722  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:18.375749  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:18.397572  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:18.397611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:18.454135  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:20.955833  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:20.967309  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:20.986237  687772 logs.go:282] 0 containers: []
	W1223 00:06:20.986258  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:20.986301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:21.004350  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.004377  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:21.004434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:21.022893  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.022919  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:21.022974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:21.042421  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.042441  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:21.042484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:21.061267  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.061293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:21.061355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:21.079988  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.080011  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:21.080064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:21.098196  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.098225  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:21.098279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:21.117158  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.117180  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:21.117191  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:21.117202  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:21.146189  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:21.146215  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:21.192645  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:21.192677  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:21.212689  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:21.212716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:21.269438  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:21.261783   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.262320   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.263990   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.264498   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.266022   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:21.269462  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:21.269480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:23.789716  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:23.801130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:23.820155  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.820180  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:23.820239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:23.838850  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.838875  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:23.838919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:23.856860  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.856881  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:23.856931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:23.874630  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.874653  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:23.874700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:23.893425  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.893454  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:23.893521  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:23.912712  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.912734  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:23.912789  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:23.931097  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.931124  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:23.931178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:23.949113  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.949138  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:23.949152  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:23.949168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:23.996109  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:23.996137  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:24.016228  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:24.016254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:24.071647  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:24.064286   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.064800   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066333   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066786   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.068314   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:24.071665  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:24.071680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:24.090918  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:24.090944  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.624354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:26.635840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:26.654444  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.654473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:26.654537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:26.673364  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.673388  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:26.673436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:26.692467  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.692489  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:26.692539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:26.711627  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.711656  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:26.711709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:26.730302  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.730332  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:26.730386  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:26.748910  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.748939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:26.748995  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:26.768525  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.768548  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:26.768603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:26.788434  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.788462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:26.788476  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:26.788491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:26.845463  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:26.845482  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:26.845494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:26.864140  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:26.864167  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.890448  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:26.890476  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:26.937390  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:26.937422  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.457766  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:29.469205  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:29.488353  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.488376  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:29.488431  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:29.508035  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.508059  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:29.508114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:29.528210  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.528234  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:29.528280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:29.546344  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.546370  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:29.546432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:29.565125  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.565153  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:29.565200  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:29.584111  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.584142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:29.584195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:29.602714  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.602735  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:29.602778  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:29.621012  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.621042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:29.621058  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:29.621073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:29.669132  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:29.669168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.689406  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:29.689431  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:29.746681  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:29.746703  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:29.746720  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:29.765762  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:29.765793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:32.299443  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:32.310848  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:32.330298  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.330326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:32.330380  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:32.349664  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.349692  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:32.349745  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:32.367944  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.367969  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:32.368081  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:32.386919  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.386940  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:32.386983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:32.405416  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.405440  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:32.405487  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:32.423080  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.423100  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:32.423144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:32.441255  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.441282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:32.441336  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:32.459763  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.459789  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:32.459801  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:32.459812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:32.507284  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:32.507314  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:32.529983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:32.530014  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:32.587816  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:32.587843  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:32.587860  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:32.607796  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:32.607826  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.136489  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:35.147976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:35.166774  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.166794  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:35.166846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:35.185872  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.185899  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:35.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:35.204053  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.204074  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:35.204115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:35.223056  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.223077  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:35.223126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:35.241616  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.241645  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:35.241699  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:35.260422  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.260476  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:35.260536  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:35.279168  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.279192  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:35.279238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:35.297208  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.297236  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:35.297252  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:35.297267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:35.317273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:35.317299  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:35.374319  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:35.374337  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:35.374349  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:35.393025  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:35.393050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.420499  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:35.420537  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:37.968117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:37.979448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:37.998789  687772 logs.go:282] 0 containers: []
	W1223 00:06:37.998815  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:37.998861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:38.019815  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.019847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:38.019910  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:38.042524  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.042552  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:38.042617  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:38.061464  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.061489  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:38.061544  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:38.080482  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.080509  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:38.080558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:38.099189  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.099215  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:38.099279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:38.118161  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.118188  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:38.118244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:38.136752  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.136786  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:38.136803  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:38.136819  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:38.182751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:38.182779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:38.202352  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:38.202375  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:38.257901  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:38.257922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:38.257933  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:38.276963  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:38.276988  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:40.806792  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:40.818244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:40.837324  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.837348  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:40.837402  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:40.856364  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.856387  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:40.856453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:40.874753  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.874780  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:40.874831  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:40.893167  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.893193  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:40.893242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:40.910901  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.910924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:40.910976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:40.930108  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.930133  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:40.930191  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:40.949021  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.949047  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:40.949101  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:40.967221  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.967246  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:40.967260  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:40.967276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:40.988752  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:40.988779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:41.048349  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:41.048374  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:41.048387  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:41.067112  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:41.067138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:41.093421  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:41.093445  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:43.639363  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:43.653263  687772 out.go:203] 
	W1223 00:06:43.654345  687772 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1223 00:06:43.654374  687772 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1223 00:06:43.654383  687772 out.go:285] * Related issues:
	W1223 00:06:43.654397  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1223 00:06:43.654411  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1223 00:06:43.655505  687772 out.go:203] 
	
	
	==> Docker <==
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.188726089Z" level=info msg="Restoring containers: start."
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.201365877Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.219292925Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.739437960Z" level=info msg="Loading containers: done."
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.749914456Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.749957470Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.750005698Z" level=info msg="Initializing buildkit"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.769087870Z" level=info msg="Completed buildkit initialization"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777195260Z" level=info msg="Daemon has completed initialization"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777258358Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777420063Z" level=info msg="API listen on /run/docker.sock"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777484333Z" level=info msg="API listen on [::]:2376"
	Dec 23 00:00:40 newest-cni-348344 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 23 00:00:41 newest-cni-348344 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Start docker client with request timeout 0s"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Loaded network plugin cni"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 23 00:00:41 newest-cni-348344 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:45.803951   20926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:45.804398   20926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:45.805917   20926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:45.806244   20926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:45.807749   20926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[Dec22 23:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +0.088099] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 12 57 26 ed f1 08 06
	[  +5.341024] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[ +14.537406] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 72 df 3b 35 8d 08 06
	[  +0.000388] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +2.465032] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 84 3f 6a 28 22 08 06
	[  +0.000373] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[Dec23 00:00] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	[Dec23 00:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 20 71 68 66 a5 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	
	
	==> kernel <==
	 00:06:45 up  3:49,  0 user,  load average: 0.36, 1.42, 1.66
	Linux newest-cni-348344 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 23 00:06:42 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:43 newest-cni-348344 kubelet[20754]: E1223 00:06:43.285859   20754 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:43 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:44 newest-cni-348344 kubelet[20767]: E1223 00:06:44.052057   20767 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:44 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:44 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:44 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 23 00:06:44 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:44 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:44 newest-cni-348344 kubelet[20792]: E1223 00:06:44.788204   20792 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:44 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:44 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:45 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 23 00:06:45 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:45 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:45 newest-cni-348344 kubelet[20805]: E1223 00:06:45.533345   20805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:45 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:45 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
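For reference, the diagnostic loop captured above repeats the same pass roughly every 2.5 seconds: probe for a running apiserver process, filter Docker for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), then gather kubelet, dmesg, Docker, and container-status logs. The core checks, reproduced from the log with shell quoting added for interactive use, are:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Every pass comes back empty or connection-refused, which is what eventually trips the K8S_APISERVER_MISSING exit at 00:06:43.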
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (302.278327ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-348344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (372.90s)
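The root cause is visible in the kubelet journal above: each restarted kubelet (PIDs 20754, 20767, 20792, 20805) exits during configuration validation with "kubelet is configured to not run on a host using cgroup v1", so no control-plane container is ever created and minikube status correctly reports the apiserver as Stopped. A standard way to confirm which cgroup hierarchy the host runs (not part of this test suite, shown here only as a diagnostic sketch) is:

	stat -fc %T /sys/fs/cgroup/

which prints cgroup2fs on a cgroup v2 host and tmpfs on cgroup v1; the Docker daemon's own warning above ("Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029") indicates this runner is still on v1.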

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.91s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
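Each WARNING below is one failed poll of the no-preload profile's apiserver at 192.168.103.2:8443; the test retries the same pod list until the 9m0s deadline expires. The equivalent manual query (assuming the profile's kubeconfig context is active) is:

	kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

The repeated connection-refused errors are consistent with the kubelet cgroup v1 validation failure seen in the newest-cni logs above on the same host, so the apiserver restarted by SecondStart likely never came up.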
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:02:37.496852   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 15 more times)
E1223 00:02:53.979539   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:02:54.254196   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 12 more times)
E1223 00:03:06.547932   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:06.553219   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:06.563473   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:06.583736   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:06.624065   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:06.704398   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:06.864865   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:07.185498   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
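The enable-default-cni-003676 errors above land roughly 5 ms, 10 ms, 20 ms, 40 ms, 80 ms, 160 ms, then 320 ms apart, and keep doubling in the later lines: the client's cert-reload watcher is retrying with exponential backoff against a file that no longer exists, because the profile was deleted earlier in the run while a cached TLS transport still points at its client.crt. An illustrative sketch of that cadence using the k8s.io/apimachinery wait package; this is not the actual cert_rotation.go code, and the starting duration and step count are read off the timestamps, not taken from the source:

    package main

    import (
        "fmt"
        "os"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Retry cadence matching the log: start at 5ms and double each attempt.
        backoff := wait.Backoff{Duration: 5 * time.Millisecond, Factor: 2.0, Steps: 8}
        path := "/home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt"
        _ = wait.ExponentialBackoff(backoff, func() (bool, error) {
            if _, err := os.Stat(path); err != nil {
                // Mirrors the E-lines: the cert was deleted, so every reload fails.
                fmt.Printf("Loading client cert failed: %v\n", err)
                return false, nil // not done; back off and retry
            }
            return true, nil
        })
    }

Since the file never reappears, the condition never succeeds, which matches the bursts of the same error for each deleted profile throughout the rest of this log.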
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:03:07.825898   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:03:09.060359   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:09.065663   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:09.075923   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:09.096182   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:09.106396   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:09.136572   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:09.216905   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:09.377410   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:03:09.698345   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:10.338575   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:03:11.619619   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:03:11.666905   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:03:14.179847   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 2 more times)
E1223 00:03:16.787934   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:03:19.300702   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 7 more times)
E1223 00:03:27.029114   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:03:29.286544   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kindnet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:03:29.541332   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 3 more times)
E1223 00:03:34.090258   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 3 more times)
E1223 00:03:38.109689   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 9 more times)
E1223 00:03:47.510183   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:03:50.021608   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 6 more times)
E1223 00:03:56.971264   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kindnet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 8 more times)
E1223 00:04:06.387107   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 9 more times)
E1223 00:04:16.174580   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 5 more times)
E1223 00:04:21.986132   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:21.991474   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:22.001789   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:22.022587   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:22.063266   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:22.143897   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:22.304317   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:04:22.624614   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:23.265589   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:04:23.975323   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:23.980611   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:23.990864   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:24.011109   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:24.051353   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:24.131665   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:24.292131   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:04:24.546038   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:24.613230   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:25.253803   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:04:26.534444   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:27.106352   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:04:28.471302   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:04:29.095157   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:04:30.981929   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:04:32.227160   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1223 00:04:34.215697   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 2 more times)
E1223 00:04:36.487534   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 5 more times)
E1223 00:04:42.468265   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated once more)
E1223 00:04:44.456034   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 8 more times)
E1223 00:04:53.441028   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 8 more times)
E1223 00:05:02.948643   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated once more)
E1223 00:05:04.936826   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 4 more times)
E1223 00:05:10.134920   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 20 more times)
E1223 00:05:31.032871   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 6 more times)
E1223 00:05:37.819888   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 5 more times)
E1223 00:05:43.909293   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated once more)
E1223 00:05:45.898064   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 3 more times)
E1223 00:05:50.392530   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 2 more times)
E1223 00:05:52.902991   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 8 more times)
E1223 00:06:01.447750   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:06:01.453019   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:06:01.463311   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:06:01.483556   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:06:01.523810   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:06:01.604161   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:06:01.764564   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:06:02.085144   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
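    (The kubenet-003676 cert_rotation.go errors above, continuing through the interleaved lines below, recur at intervals that roughly double, from about 5 ms between the first attempts up to roughly 20 s between the later ones: the signature of an exponential-backoff reload loop. A minimal Go sketch of that retry pattern, assuming only the doubling schedule visible in the timestamps; the path, step count, and helper are illustrative, not taken from client-go.)

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// loadClientCert stands in for the reload attempted in cert_rotation.go;
// here it only checks that the profile's client.crt exists (hypothetical path).
func loadClientCert(path string) error {
	_, err := os.Stat(path)
	return err
}

func main() {
	backoff := wait.Backoff{
		Duration: 5 * time.Millisecond, // first retry delay seen in the log
		Factor:   2.0,                  // each subsequent delay doubles
		Steps:    14,                   // ~5 ms up to tens of seconds, like the burst above
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := loadClientCert(".minikube/profiles/kubenet-003676/client.crt"); err != nil {
			fmt.Printf("Loading client cert failed: %v\n", err)
			return false, nil // not done yet; retry after the next backoff step
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err) // wait.ErrWaitTimeout once all steps are exhausted
	}
}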
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:06:02.725471   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:06:04.006497   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 2 more times)
E1223 00:06:06.567522   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 4 more times)
E1223 00:06:11.688376   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 9 more times)
E1223 00:06:21.929061   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 8 more times)
E1223 00:06:30.659829   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:06:32.331564   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 9 more times)
E1223 00:06:42.409252   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
    (the warning above repeated 17 more times)
E1223 00:07:00.015820   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 5 more times]
E1223 00:07:05.829720   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:07:07.819281   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:07:09.814076   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 12 more times]
E1223 00:07:23.370050   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 43 more times]
E1223 00:08:06.548313   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:08:09.060077   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 7 more times]
E1223 00:08:16.477977   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 11 more times]
E1223 00:08:29.286175   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kindnet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 4 more times]
E1223 00:08:34.233445   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 2 more times]
E1223 00:08:36.743954   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:08:38.109921   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 6 more times]
E1223 00:08:45.290809   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 20 more times]
E1223 00:09:06.387667   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 15 more times]
E1223 00:09:21.986026   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:09:23.975726   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[last message repeated 9 more times]
E1223 00:09:33.717387   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous WARNING line repeated 15 more times)
E1223 00:09:49.670175   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:09:51.659690   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:09:53.441141   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous WARNING line repeated 15 more times)
E1223 00:10:10.134648   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous WARNING line repeated 18 more times)
E1223 00:10:29.431514   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:10:31.033042   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous WARNING line repeated 30 more times)
E1223 00:11:01.447704   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
(previous WARNING line repeated 14 more times)
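
Every poll above fails at the TCP layer: nothing is accepting connections on the apiserver port (192.168.103.2:8443) inside the no-preload-063943 container, so the dashboard pod list can never be retrieved. A minimal manual probe from the same host (a sketch, assuming the container network is still up) would be:

    $ curl -k --connect-timeout 5 https://192.168.103.2:8443/version

In this state curl should exit with code 7 (connection refused), the same error the test helper reports.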
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 2 (304.339388ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
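
The query the helper retries is an ordinary label-selector pod list. A hedged manual equivalent, assuming minikube created its usual kubectl context named after the profile:

    $ kubectl --context no-preload-063943 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

This issues the same request as the GET .../pods?labelSelector=k8s-app%3Dkubernetes-dashboard seen in the warnings above.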
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
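
The proxy snapshot can be reproduced by hand; a rough equivalent of what the helper records:

    $ env | grep -i -E '^(http_proxy|https_proxy|no_proxy)=' || echo "<empty>"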
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-063943
helpers_test.go:244: (dbg) docker inspect no-preload-063943:

-- stdout --
	[
	    {
	        "Id": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	        "Created": "2025-12-22T23:45:49.557145486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:56:07.024549385Z",
	            "FinishedAt": "2025-12-22T23:56:05.577772514Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hosts",
	        "LogPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc-json.log",
	        "Name": "/no-preload-063943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-063943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-063943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	                "LowerDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-063943",
	                "Source": "/var/lib/docker/volumes/no-preload-063943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-063943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-063943",
	                "name.minikube.sigs.k8s.io": "no-preload-063943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e615544c2ed8dc279a0d7bd7031d234c4bd36d86ac886a8680dbb0ce786c6bb0",
	            "SandboxKey": "/var/run/docker/netns/e615544c2ed8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-063943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fe1a4d651e77a6056be2344adfa00e0a1474c8d315239814c9f2b4594dd53fd",
	                    "EndpointID": "77c3f5905f39c7f705fb61bcc99a23730dfec3ccc0be5afe97e05c39881c936c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "b6:1b:7b:4d:bd:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-063943",
	                        "786df4b77771"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
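
Single fields can be pulled from the same document with docker inspect's Go-template flag instead of dumping everything; for example (a sketch against this container):

    $ docker inspect -f '{{.State.Status}}' no-preload-063943
    $ docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-063943").IPAddress}}' no-preload-063943

The first should print "running" and the second 192.168.103.2, matching the State and NetworkSettings blocks above.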
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 2 (297.141178ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
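
Read together, the two template queries say the host container is Running while the apiserver is Stopped. The untemplated form prints all component states at once (a sketch):

    $ out/minikube-linux-amd64 status -p no-preload-063943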
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-063943 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-063943 logs -n 25: (1.18839951s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-003676 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status docker --all --full --no-pager          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat docker --no-pager                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/docker/daemon.json                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo docker system info                                       │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat cri-docker --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cri-dockerd --version                                    │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status containerd --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat containerd --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /lib/systemd/system/containerd.service               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/containerd/config.toml                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo containerd config dump                                   │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status crio --all --full --no-pager            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │                     │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat crio --no-pager                            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo crio config                                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ delete  │ -p kubenet-003676                                                               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ image   │ newest-cni-348344 image list --format=json                                      │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ pause   │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ unpause │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ delete  │ -p newest-cni-348344                                                            │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ delete  │ -p newest-cni-348344                                                            │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
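	Every Audit row above is an ordinary CLI invocation and can be replayed verbatim against a live profile; for example, the cri-docker probe from the table (kubenet-003676 was deleted at the end of its run, so substitute a current profile):

	out/minikube-linux-amd64 ssh -p kubenet-003676 -- sudo systemctl status cri-docker --all --full --no-pager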
	
	
	==> Last Start <==
	Log file created at: 2025/12/23 00:00:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1223 00:00:34.066824  687772 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:34.067051  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067058  687772 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:34.067063  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067257  687772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:34.067701  687772 out.go:368] Setting JSON to false
	I1223 00:00:34.068753  687772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13374,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:34.068805  687772 start.go:143] virtualization: kvm guest
	W1223 00:00:29.964565  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:31.965119  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:33.965297  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:34.070524  687772 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:34.072192  687772 notify.go:221] Checking for updates...
	I1223 00:00:34.072201  687772 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:34.073912  687772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:34.074996  687772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:34.076047  687772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:34.077175  687772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:34.078295  687772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:34.079882  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:34.080446  687772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:34.106101  687772 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:34.106213  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.161275  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.151129133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.161373  687772 docker.go:319] overlay module found
	I1223 00:00:34.163775  687772 out.go:179] * Using the docker driver based on existing profile
	I1223 00:00:34.164711  687772 start.go:309] selected driver: docker
	I1223 00:00:34.164723  687772 start.go:928] validating driver "docker" against &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.164829  687772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1223 00:00:34.165640  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.231050  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.221418272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.231362  687772 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1223 00:00:34.231388  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:34.231454  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:34.231489  687772 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.233188  687772 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1223 00:00:34.234320  687772 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:34.235503  687772 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:34.236442  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:34.236471  687772 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1223 00:00:34.236486  687772 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:34.236540  687772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:34.236575  687772 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1223 00:00:34.236586  687772 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1223 00:00:34.236749  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.256540  687772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:34.256557  687772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1223 00:00:34.256572  687772 cache.go:243] Successfully downloaded all kic artifacts
	I1223 00:00:34.256623  687772 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1223 00:00:34.256695  687772 start.go:364] duration metric: took 39.918µs to acquireMachinesLock for "newest-cni-348344"
	I1223 00:00:34.256714  687772 start.go:96] Skipping create...Using existing machine configuration
	I1223 00:00:34.256719  687772 fix.go:54] fixHost starting: 
	I1223 00:00:34.256918  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.273955  687772 fix.go:112] recreateIfNeeded on newest-cni-348344: state=Stopped err=<nil>
	W1223 00:00:34.273978  687772 fix.go:138] unexpected machine state, will restart: <nil>
	W1223 00:00:31.998314  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:34.497435  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:36.498048  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
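	The interleaved 622784 lines above come from a second profile (no-preload-063943) polling its node's Ready condition against an apiserver that is refusing connections; each failed probe is retried. The equivalent manual probe, assuming minikube's usual profile-named kubeconfig context:

	kubectl --context no-preload-063943 get node no-preload-063943 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'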
	I1223 00:00:34.276035  687772 out.go:252] * Restarting existing docker container for "newest-cni-348344" ...
	I1223 00:00:34.276100  687772 cli_runner.go:164] Run: docker start newest-cni-348344
	I1223 00:00:34.520995  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.540249  687772 kic.go:430] container "newest-cni-348344" state is running.
	I1223 00:00:34.540736  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:34.560319  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.560718  687772 machine.go:94] provisionDockerMachine start ...
	I1223 00:00:34.560825  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:34.581907  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:34.582194  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:34.582211  687772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1223 00:00:34.583095  687772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41172->127.0.0.1:33168: read: connection reset by peer
	I1223 00:00:37.726578  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.726621  687772 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1223 00:00:37.726764  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.746947  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.747183  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.747203  687772 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1223 00:00:37.900818  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.900900  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.919317  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.919561  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.919579  687772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1223 00:00:38.062239  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.062284  687772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1223 00:00:38.062331  687772 ubuntu.go:190] setting up certificates
	I1223 00:00:38.062344  687772 provision.go:84] configureAuth start
	I1223 00:00:38.062400  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:38.081263  687772 provision.go:143] copyHostCerts
	I1223 00:00:38.081355  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1223 00:00:38.081386  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1223 00:00:38.081497  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1223 00:00:38.081760  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1223 00:00:38.081785  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1223 00:00:38.081851  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1223 00:00:38.082007  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1223 00:00:38.082027  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1223 00:00:38.082101  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1223 00:00:38.082238  687772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
	I1223 00:00:38.170695  687772 provision.go:177] copyRemoteCerts
	I1223 00:00:38.170759  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1223 00:00:38.170815  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.189123  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.291920  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1223 00:00:38.309166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1223 00:00:38.326166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1223 00:00:38.344564  687772 provision.go:87] duration metric: took 282.199681ms to configureAuth
	I1223 00:00:38.344627  687772 ubuntu.go:206] setting minikube options for container-runtime
	I1223 00:00:38.344911  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:38.344995  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.366260  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.366529  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.366545  687772 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1223 00:00:38.510728  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1223 00:00:38.510754  687772 ubuntu.go:71] root file system type: overlay
	I1223 00:00:38.510908  687772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1223 00:00:38.510979  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.532018  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.532329  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.532458  687772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1223 00:00:38.686369  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
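	The empty ExecStart= followed by a populated one, as the in-file comment above notes, is systemd's standard idiom for replacing (rather than appending to) an inherited start command. A minimal sketch of the same pattern as a conventional drop-in (path and flags hypothetical, not what minikube writes here):

	# /etc/systemd/system/docker.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock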
	I1223 00:00:38.686443  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.704974  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.705187  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.705204  687772 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1223 00:00:38.853249  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.853285  687772 machine.go:97] duration metric: took 4.292539002s to provisionDockerMachine
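	The command at 00:00:38.705 is a compare-then-swap: diff -u exits 0 when the rendered unit matches the installed one, so the mv/daemon-reload/restart branch only runs when the file actually changed, keeping reprovisioning idempotent. The same pattern, generalized (variable names hypothetical):

	dst=/lib/systemd/system/docker.service
	sudo diff -u "$dst" "$dst.new" || {
	  sudo mv "$dst.new" "$dst"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}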
	I1223 00:00:38.853303  687772 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1223 00:00:38.853321  687772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1223 00:00:38.853419  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1223 00:00:38.853487  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.872421  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.979494  687772 ssh_runner.go:195] Run: cat /etc/os-release
	I1223 00:00:38.983309  687772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1223 00:00:38.983340  687772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1223 00:00:38.983352  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1223 00:00:38.983401  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1223 00:00:38.983478  687772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1223 00:00:38.983566  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1223 00:00:38.991328  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:39.009038  687772 start.go:296] duration metric: took 155.718049ms for postStartSetup
	I1223 00:00:39.009116  687772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1223 00:00:39.009156  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.028095  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	W1223 00:00:36.464751  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:38.465798  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:39.126878  687772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1223 00:00:39.131527  687772 fix.go:56] duration metric: took 4.874800463s for fixHost
	I1223 00:00:39.131555  687772 start.go:83] releasing machines lock for "newest-cni-348344", held for 4.874847678s
	I1223 00:00:39.131653  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:39.149572  687772 ssh_runner.go:195] Run: cat /version.json
	I1223 00:00:39.149644  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.149684  687772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1223 00:00:39.149791  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.169897  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.170262  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.329086  687772 ssh_runner.go:195] Run: systemctl --version
	I1223 00:00:39.336405  687772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1223 00:00:39.341291  687772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1223 00:00:39.341348  687772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1223 00:00:39.349279  687772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1223 00:00:39.349306  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.349351  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.349509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.363512  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1223 00:00:39.372815  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1223 00:00:39.381448  687772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.381508  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1223 00:00:39.390109  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.399162  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1223 00:00:39.408060  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.416584  687772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1223 00:00:39.424711  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1223 00:00:39.433432  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1223 00:00:39.442056  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
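	The sed sequence above rewrites containerd's CRI settings in place rather than templating a fresh file; for instance, the SystemdCgroup edit targets the runc options table, which in a stock config.toml looks roughly like this (fragment per containerd's CRI plugin defaults, not copied from this host):

	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = false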
	I1223 00:00:39.450905  687772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1223 00:00:39.458512  687772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1223 00:00:39.466991  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:39.550442  687772 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1223 00:00:39.632902  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.632954  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.633002  687772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1223 00:00:39.646974  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.659240  687772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1223 00:00:39.675979  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.688406  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1223 00:00:39.700912  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.715235  687772 ssh_runner.go:195] Run: which cri-dockerd
	I1223 00:00:39.718945  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1223 00:00:39.726972  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1223 00:00:39.739559  687772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1223 00:00:39.825824  687772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1223 00:00:39.909620  687772 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.909743  687772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1223 00:00:39.923453  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1223 00:00:39.935401  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:40.031388  687772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1223 00:00:40.779801  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1223 00:00:40.794700  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1223 00:00:40.807727  687772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1223 00:00:40.821730  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:40.835038  687772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1223 00:00:40.917554  687772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1223 00:00:41.006720  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.090943  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1223 00:00:41.119047  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1223 00:00:41.131277  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.215437  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1223 00:00:41.283622  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:41.298485  687772 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1223 00:00:41.298551  687772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1223 00:00:41.302554  687772 start.go:564] Will wait 60s for crictl version
	I1223 00:00:41.302645  687772 ssh_runner.go:195] Run: which crictl
	I1223 00:00:41.306307  687772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1223 00:00:41.332425  687772 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1223 00:00:41.332495  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.358777  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.385668  687772 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1223 00:00:41.385749  687772 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1223 00:00:41.403696  687772 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1223 00:00:41.407961  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
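	The one-liner above pins host.minikube.internal to the network gateway without duplicating entries: any old mapping is filtered out with grep -v, the fresh line appended, and the result copied back over /etc/hosts. The same pattern for an arbitrary entry (host name and IP hypothetical):

	{ grep -v $'\tmy.host.internal$' /etc/hosts; echo $'192.168.94.1\tmy.host.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts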
	I1223 00:00:41.419714  687772 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1223 00:00:38.997749  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:40.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:41.420647  687772 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1223 00:00:41.420848  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:41.420937  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.444049  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.444075  687772 docker.go:624] Images already preloaded, skipping extraction
	I1223 00:00:41.444128  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.466181  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.466206  687772 cache_images.go:86] Images are preloaded, skipping loading
	I1223 00:00:41.466215  687772 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1223 00:00:41.466312  687772 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1223 00:00:41.466372  687772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1223 00:00:41.520065  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:41.520097  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:41.520116  687772 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1223 00:00:41.520151  687772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1223 00:00:41.520280  687772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
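	The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below and is what kubeadm ultimately consumes. On a fresh node the hand-run equivalent would be along the lines of the following (a sketch only; on this restart path minikube re-applies individual kubeadm phases rather than a full init):

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml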
	I1223 00:00:41.520348  687772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1223 00:00:41.529909  687772 binaries.go:51] Found k8s binaries, skipping transfer
	I1223 00:00:41.529987  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1223 00:00:41.538670  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1223 00:00:41.552566  687772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1223 00:00:41.565766  687772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1223 00:00:41.578604  687772 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1223 00:00:41.582388  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.592239  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.689716  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:41.716265  687772 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1223 00:00:41.716293  687772 certs.go:195] generating shared ca certs ...
	I1223 00:00:41.716315  687772 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:41.716492  687772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1223 00:00:41.716548  687772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1223 00:00:41.716557  687772 certs.go:257] generating profile certs ...
	I1223 00:00:41.716731  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1223 00:00:41.716814  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1223 00:00:41.716864  687772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1223 00:00:41.716992  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1223 00:00:41.717032  687772 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1223 00:00:41.717041  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1223 00:00:41.717076  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1223 00:00:41.717110  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1223 00:00:41.717142  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1223 00:00:41.717206  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:41.718210  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1223 00:00:41.739858  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1223 00:00:41.759304  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1223 00:00:41.777433  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1223 00:00:41.794878  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1223 00:00:41.811695  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1223 00:00:41.829786  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1223 00:00:41.846666  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1223 00:00:41.863546  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1223 00:00:41.880762  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1223 00:00:41.898275  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1223 00:00:41.915259  687772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1223 00:00:41.927541  687772 ssh_runner.go:195] Run: openssl version
	I1223 00:00:41.933577  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.940758  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1223 00:00:41.948346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952096  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952140  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.987400  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1223 00:00:41.995158  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.002657  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1223 00:00:42.010346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014050  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014097  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.048067  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1223 00:00:42.055784  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.063393  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1223 00:00:42.071081  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075446  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075515  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.110438  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
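Each openssl x509 -hash -noout run above prints the subject-name hash that OpenSSL's lookup code expects as a <hash>.0 symlink under /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is what the following sudo test -L calls verify. The ln/test pairing as a sketch:

    # Install a CA into the OpenSSL hash-lookup directory and confirm the link.
    CERT=/usr/share/ca-certificates/minikubeCA.pem   # path from the log
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "linked as ${HASH}.0"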
	I1223 00:00:42.118365  687772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1223 00:00:42.122225  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1223 00:00:42.157294  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1223 00:00:42.192810  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1223 00:00:42.226497  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1223 00:00:42.261017  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1223 00:00:42.299910  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
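openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds, so the six runs above are a 24-hour expiry sweep over the control-plane certs before they are reused. The same sweep as one loop, cert names copied from the log:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" \
        -checkend 86400 >/dev/null || echo "$c.crt expires within 24h"
    done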
	I1223 00:00:42.335335  687772 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
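The StartCluster line dumps minikube's in-memory ClusterConfig struct; the persisted copy lives in the profile directory and is easier to read than the dump. A sketch, assuming jq is available and using the default MINIKUBE_HOME layout (this CI run points it at the jenkins path seen above):

    # Pretty-print the persisted profile config.
    jq . ~/.minikube/profiles/newest-cni-348344/config.json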
	I1223 00:00:42.335492  687772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1223 00:00:42.355576  687772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1223 00:00:42.364792  687772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1223 00:00:42.364813  687772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1223 00:00:42.364868  687772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1223 00:00:42.373141  687772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1223 00:00:42.374088  687772 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.374674  687772 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-348344" cluster setting kubeconfig missing "newest-cni-348344" context setting]
	I1223 00:00:42.375665  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.377667  687772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1223 00:00:42.385784  687772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1223 00:00:42.385817  687772 kubeadm.go:602] duration metric: took 20.99658ms to restartPrimaryControlPlane
	I1223 00:00:42.385829  687772 kubeadm.go:403] duration metric: took 50.508354ms to StartCluster
	I1223 00:00:42.385848  687772 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.385918  687772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.387577  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.387909  687772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:42.388038  687772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:42.388131  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:42.388136  687772 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-348344"
	I1223 00:00:42.388154  687772 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-348344"
	I1223 00:00:42.388170  687772 addons.go:70] Setting dashboard=true in profile "newest-cni-348344"
	I1223 00:00:42.388189  687772 addons.go:70] Setting default-storageclass=true in profile "newest-cni-348344"
	I1223 00:00:42.388208  687772 addons.go:239] Setting addon dashboard=true in "newest-cni-348344"
	W1223 00:00:42.388222  687772 addons.go:248] addon dashboard should already be in state true
	I1223 00:00:42.388226  687772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-348344"
	I1223 00:00:42.388262  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388184  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388611  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388764  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388771  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.390679  687772 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:42.391709  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:42.411960  687772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1223 00:00:42.411964  687772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:42.412088  687772 addons.go:239] Setting addon default-storageclass=true in "newest-cni-348344"
	I1223 00:00:42.412122  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.412456  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.413023  687772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.413043  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:42.413094  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.416720  687772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1223 00:00:42.418014  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1223 00:00:42.418038  687772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1223 00:00:42.418101  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.435878  687772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.435898  687772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:42.435951  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.437317  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.440138  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.468807  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
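The docker container inspect template above resolves the container's published 22/tcp host port (33168 here), which sshutil then dials on 127.0.0.1 with the profile's id_rsa key. The same tunnel can be opened by hand; a sketch, assuming the default machines directory:

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-348344)
    ssh -i ~/.minikube/machines/newest-cni-348344/id_rsa -p "$PORT" docker@127.0.0.1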
	I1223 00:00:42.547260  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:42.600477  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1223 00:00:42.600501  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1223 00:00:42.601363  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.607651  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.614184  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1223 00:00:42.614204  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1223 00:00:42.628125  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1223 00:00:42.628154  687772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1223 00:00:42.643093  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1223 00:00:42.643121  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1223 00:00:42.702024  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1223 00:00:42.702052  687772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1223 00:00:42.716211  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1223 00:00:42.716238  687772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1223 00:00:42.729547  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1223 00:00:42.729569  687772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1223 00:00:42.742218  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1223 00:00:42.742246  687772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1223 00:00:42.754716  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:42.754741  687772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1223 00:00:42.767403  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.251127  687772 api_server.go:52] waiting for apiserver process to appear ...
	W1223 00:00:43.251190  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.251231  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251243  687772 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:43.251536  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.392258  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.445582  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.478824  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.509277  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:43.537617  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.565023  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.661224  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.715310  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.751478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.004029  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.062136  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
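Interleaved with the apply retries, the "waiting for apiserver process to appear" loop polls for the process itself rather than the HTTPS endpoint, re-running the check below roughly every 500ms (visible in the .251/.751 timestamps):

    # Exit status 0 once a kube-apiserver process with "minikube" in its
    # command line exists; pattern copied verbatim from the log.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'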
	W1223 00:00:40.965708  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.465160  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.498161  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:45.997960  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:44.088452  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.143262  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.251335  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.383985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:44.396693  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.439768  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:44.455552  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.902917  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.962468  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.251805  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.326701  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:45.400433  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.424647  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:45.480956  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.751310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.799387  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:45.854148  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.251658  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.752208  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.836006  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:46.890186  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.995326  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:47.057423  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.251702  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:47.426474  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:47.485952  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.752131  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:48.207567  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:48.251315  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:48.262862  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.519836  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:48.578806  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.752112  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:45.465342  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.465739  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:50.498122  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:49.251876  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.751844  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.890160  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:49.947020  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.066047  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:50.120129  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.251336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:50.558241  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:50.615271  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.751572  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.252351  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.751913  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.251786  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.752327  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.791985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:52.850108  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:53.251446  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:53.751646  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:49.964544  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:51.964692  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:53.964842  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:52.997659  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:54.998340  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:54.252244  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.751444  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.347206  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:55.401615  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:55.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.936027  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:55.995088  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:56.251432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:56.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.111686  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:57.169648  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:57.251735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.751676  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.251279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.751377  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:55.965091  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:58.465307  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:59.465067  679852 pod_ready.go:94] pod "coredns-66bc5c9577-v4sr7" is "Ready"
	I1223 00:00:59.465093  679852 pod_ready.go:86] duration metric: took 31.505726579s for pod "coredns-66bc5c9577-v4sr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.467499  679852 pod_ready.go:83] waiting for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.471040  679852 pod_ready.go:94] pod "etcd-kubenet-003676" is "Ready"
	I1223 00:00:59.471063  679852 pod_ready.go:86] duration metric: took 3.544638ms for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.472907  679852 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.476385  679852 pod_ready.go:94] pod "kube-apiserver-kubenet-003676" is "Ready"
	I1223 00:00:59.476406  679852 pod_ready.go:86] duration metric: took 3.481083ms for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.478385  679852 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.663149  679852 pod_ready.go:94] pod "kube-controller-manager-kubenet-003676" is "Ready"
	I1223 00:00:59.663178  679852 pod_ready.go:86] duration metric: took 184.769862ms for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.863586  679852 pod_ready.go:83] waiting for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.263634  679852 pod_ready.go:94] pod "kube-proxy-4ftjm" is "Ready"
	I1223 00:01:00.263661  679852 pod_ready.go:86] duration metric: took 400.030267ms for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.464316  679852 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863672  679852 pod_ready.go:94] pod "kube-scheduler-kubenet-003676" is "Ready"
	I1223 00:01:00.863704  679852 pod_ready.go:86] duration metric: took 399.359894ms for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863716  679852 pod_ready.go:40] duration metric: took 32.907880274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
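Note: the pod_ready.go lines above poll each control-plane pod until its Ready condition turns True, recording a per-pod duration. A minimal client-go sketch of that check, assuming a reachable kubeconfig at the default path and reusing the pod name from this run (both would differ elsewhere):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the check behind the pod_ready.go lines: a pod
    // counts as "Ready" when its PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"coredns-66bc5c9577-v4sr7", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // poll, as the timestamps above suggest
    	}
    }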
	I1223 00:01:00.909769  679852 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
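Note: the start.go:625 line reports the minor-version skew between the kubectl binary and the cluster; 1.35 against 1.34 is a skew of one, which kubectl tolerates. A small sketch of that arithmetic (a hypothetical helper, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorOf extracts the minor component from a "x.y.z" version string.
    func minorOf(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectl, cluster := "1.35.0", "1.34.3" // versions from the log line above
    	skew := minorOf(kubectl) - minorOf(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }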
	I1223 00:01:00.911549  679852 out.go:179] * Done! kubectl is now configured to use "kubenet-003676" cluster and "default" namespace by default
	W1223 00:00:57.497653  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:59.497943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:59.251660  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.751533  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.252079  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.347519  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:00.405959  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.406017  687772 retry.go:84] will retry after 5.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
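Note: the retry.go:84 lines show minikube re-running a failed apply after a delay (5.4s here; 18.8s, 15.6s, 12s, 26.5s, and 25.8s later in this log). A sketch of that retry-with-backoff pattern, assuming a simple jittered doubling rather than minikube's actual schedule:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff is a hypothetical helper, not minikube's retry.go:
    // it reruns fn until it succeeds or attempts are exhausted, sleeping
    // a jittered, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if i == attempts-1 {
    			return err
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return errors.New("unreachable")
    }

    func main() {
    	err := retryWithBackoff(3, 2*time.Second, func() error {
    		return errors.New("apply failed: connection refused")
    	})
    	fmt.Println(err)
    }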
	I1223 00:01:00.564176  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:00.620480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.751684  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.251318  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.252159  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.751813  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.251679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.752247  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
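Note: the half-second cadence of "sudo pgrep -xnf kube-apiserver.*minikube.*" lines is minikube polling (over SSH, via ssh_runner.go) for the apiserver process to appear. A local stand-in for that loop, relying on pgrep's exit-1-on-no-match behavior:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Hypothetical local version of the ssh_runner loop above: poll
    	// pgrep twice a second until a kube-apiserver process matching
    	// the minikube profile shows up.
    	for {
    		out, err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Output()
    		if err == nil && len(out) > 0 { // pgrep exits non-zero when nothing matches
    			fmt.Printf("kube-apiserver up, pid %s", out)
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }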
	W1223 00:01:01.997413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:03.998103  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:06.498109  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:04.252308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.751451  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.252117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.751808  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.759328  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:05.824086  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:05.824133  687772 retry.go:84] will retry after 18.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.105622  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:06.162303  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.251478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:06.751336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.751827  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.251532  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.751779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:08.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:11.498214  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:09.251540  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.503324  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:09.562697  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.562742  687772 retry.go:84] will retry after 15.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.751986  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.251441  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.751356  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.251389  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.752279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.251802  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.751324  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.251818  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.751866  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:13.998099  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:16.497424  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:14.252139  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.751583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.252310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.751625  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.251842  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.252084  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.751783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.251376  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.751964  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:18.498157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:20.998023  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:19.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.751822  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.251704  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.568697  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:20.639449  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.639500  687772 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.751734  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.252249  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.751801  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.251412  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.751366  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.252197  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.751273  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:23.498226  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:25.998233  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:24.252133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.590346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:24.662633  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.662674  687772 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.751773  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:25.211650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:01:25.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:25.284581  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:25.752154  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.251728  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.751953  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.251838  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.751740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.251778  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.752182  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:28.498149  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:30.998068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:29.252321  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.752290  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.251523  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.251957  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.751675  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.251501  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.610650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:32.668272  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.668321  687772 retry.go:84] will retry after 25.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.751294  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.251566  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.751514  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:33.497906  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:35.997708  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:34.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.751347  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.251743  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.751789  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.251397  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.751367  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.251392  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.751873  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.751798  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:38.498282  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:40.998196  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:39.251316  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.752288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.251339  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.751372  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.752308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.752021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:42.776761  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.776791  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:42.776839  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:42.797079  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.797110  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:42.797168  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:42.817812  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.817839  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:42.817907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:42.837835  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.837868  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:42.837924  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:42.858165  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.858188  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:42.858231  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:42.878211  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.878236  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:42.878281  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:42.897739  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.897762  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:42.897817  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:42.918314  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.918340  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
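
The scan above probes each control-plane component by listing containers whose names match the kubelet's k8s_<component> naming convention. A sketch of the same discovery loop against a local Docker daemon (local execution, rather than minikube's SSH runner, is an assumption), shown for a few of the component names from the log:

    // Sketch of the per-component container discovery seen above:
    // docker ps -a --filter=name=k8s_<component> --format={{.ID}}.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            out, _ := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
        }
    }
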
	I1223 00:01:42.918352  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:42.918362  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:42.965734  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:42.965766  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:42.986555  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:42.986585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:43.052069  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:43.052092  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:43.052108  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:43.072511  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:43.072542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
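
The "container status" step uses a shell fallback, copied verbatim here from the log line above: prefer crictl when it is on PATH, otherwise fall back to `docker ps -a`. A sketch that runs the same one-liner (local execution is an assumption; minikube executes it on the node):

    // Sketch of the container-status fallback from the log: try crictl,
    // fall back to docker. Requires bash and sudo to be available.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
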
	W1223 00:01:43.497482  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:45.497538  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:45.620354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:45.631856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:45.651448  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.651473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:45.651528  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:45.671204  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.671229  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:45.671279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:45.690397  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.690418  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:45.690470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:45.711316  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.711342  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:45.711400  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:45.731731  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.731755  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:45.731812  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:45.751415  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.751442  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:45.751500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:45.770135  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.770157  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:45.770202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:45.789088  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.789114  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:45.789127  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:45.789139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:45.819714  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:45.819741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:45.867983  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:45.868013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:45.888018  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:45.888051  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:45.945247  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:45.945270  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:45.945286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:47.390289  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:47.445750  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:47.445788  687772 retry.go:84] will retry after 18.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:48.464795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:48.476515  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:48.499002  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.499028  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:48.499082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:48.523130  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.523165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:48.523225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:48.547882  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.547904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:48.547950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:48.567534  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.567560  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:48.567627  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:48.590197  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.590222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:48.590280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:48.609279  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.609306  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:48.609357  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:48.628700  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.628731  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:48.628795  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:48.648378  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.648402  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:48.648416  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:48.648433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:48.672700  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:48.672741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:48.743864  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:48.743885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:48.743897  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:48.762911  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:48.762941  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:48.793205  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:48.793235  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1223 00:01:47.498181  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:49.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:51.128346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:51.182480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.182522  687772 retry.go:84] will retry after 29.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.341781  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:51.353222  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:51.373421  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.373447  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:51.373497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:51.392557  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.392613  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:51.392675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:51.411939  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.411964  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:51.412018  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:51.430968  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.430998  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:51.431054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:51.451234  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.451266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:51.451329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:51.470904  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.470947  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:51.471017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:51.490121  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.490146  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:51.490201  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:51.511843  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.511875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:51.511891  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:51.511906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:51.545106  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:51.545138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:51.594541  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:51.594576  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:51.615455  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:51.615485  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:51.680061  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:51.680080  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:51.680091  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1223 00:01:52.497841  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:54.498062  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:56.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:54.199432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:54.211004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:54.230469  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.230498  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:54.230548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:54.251212  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.251245  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:54.251300  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:54.274147  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.274177  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:54.274238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:54.297381  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.297413  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:54.297471  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:54.316290  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.316315  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:54.316362  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:54.335315  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.335339  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:54.335393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:54.354058  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.354089  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:54.354144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:54.372661  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.372686  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:54.372700  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:54.372715  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:54.417565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:54.417601  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:54.438013  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:54.438041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:54.497552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:54.497575  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:54.497589  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.517495  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:54.517523  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:57.056037  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:57.067478  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:57.087084  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.087114  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:57.087183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:57.105193  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.105218  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:57.105270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:57.122859  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.122885  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:57.122931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:57.141996  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.142021  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:57.142074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:57.160005  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.160032  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:57.160083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:57.178889  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.178915  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:57.178989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:57.196419  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.196446  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:57.196498  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:57.214764  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.214790  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:57.214804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:57.214817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:57.266333  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:57.266370  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:57.289301  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:57.289330  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:57.347060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:57.347099  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:57.347117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:57.370222  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:57.370257  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:58.466074  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:58.519779  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:58.519828  687772 retry.go:84] will retry after 42.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
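	For readers tracing the retry.go entry above: minikube re-runs the failed kubectl apply after a computed delay instead of failing outright. A minimal shell sketch of that pattern follows; it is illustrative only (minikube's real loop lives in retry.go), with the 42s delay and the single dashboard-ns.yaml manifest taken from this log as stand-ins.

	    # Illustrative retry loop, not minikube's implementation: re-apply one
	    # of the failing manifests until the API server accepts it.
	    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	        -f /etc/kubernetes/addons/dashboard-ns.yaml; do
	        echo "apply failed; retrying in 42s" >&2
	        sleep 42
	    done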
	W1223 00:01:58.498177  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:00.998033  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:59.898063  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:59.909410  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:59.927950  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.927974  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:59.928017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:59.946773  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.946800  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:59.946861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:59.964419  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.964443  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:59.964500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:59.982454  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.982478  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:59.982537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:00.000838  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.000860  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:00.000929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:00.018673  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.018696  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:00.018747  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:00.035944  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.035973  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:00.036027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:00.053640  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.053666  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:00.053679  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:00.053700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:00.098844  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:00.098870  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:00.120198  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:00.120224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:00.175459  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:00.175477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:00.175490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:00.194106  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:00.194146  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
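	Each gather cycle above runs the same five probes. Assuming shell access to the node (for example via minikube ssh), they can be reproduced directly; every command below is copied from the Run: lines in this log.

	    # Diagnostics minikube gathers on each cycle (paths from this log):
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a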
	I1223 00:02:02.722361  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:02.733721  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:02.753939  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.753963  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:02.754025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:02.773570  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.773610  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:02.773665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:02.793427  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.793451  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:02.793514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:02.812154  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.812183  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:02.812241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:02.830757  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.830777  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:02.830819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:02.849140  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.849163  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:02.849206  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:02.867505  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.867529  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:02.867584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:02.885892  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.885920  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:02.885935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:02.885950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:02.933880  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:02.933906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:02.955273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:02.955304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:03.009924  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:03.009951  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:03.010012  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:03.028798  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:03.028821  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:03.497953  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:05.997506  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:05.557718  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:05.569769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:05.588873  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.588899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:05.588946  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:05.607265  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.607289  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:05.607342  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:05.625761  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.625790  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:05.625860  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:05.643443  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.643464  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:05.643513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:05.661241  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.661266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:05.661314  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:05.679744  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.679764  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:05.679805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:05.697808  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.697831  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:05.697878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:05.716222  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.716245  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:05.716255  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:05.716269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:05.772404  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:05.772437  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:05.793610  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:05.793643  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:05.849453  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:05.849479  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:05.849493  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:05.868250  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:05.868273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:05.997019  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:02:06.048845  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:06.048961  687772 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1223 00:02:08.398283  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:08.409794  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:08.428854  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.428878  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:08.428927  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:08.447223  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.447248  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:08.447292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:08.464797  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.464816  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:08.464857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:08.483364  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.483386  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:08.483449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:08.501985  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.502014  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:08.502064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:08.520995  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.521020  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:08.521071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:08.542089  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.542109  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:08.542149  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:08.560448  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.560468  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:08.560478  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:08.560489  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:08.606044  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:08.606073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:08.626314  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:08.626339  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:08.681455  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:08.681477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:08.681490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:08.699804  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:08.699830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:07.998312  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:10.498076  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:11.230050  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:11.241320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:11.260234  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.260256  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:11.260301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:11.279535  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.279558  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:11.279635  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:11.297775  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.297799  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:11.297843  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:11.315756  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.315780  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:11.315823  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:11.333828  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.333850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:11.333894  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:11.351608  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.351631  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:11.351673  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:11.371003  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.371024  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:11.371065  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:11.388642  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.388662  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:11.388671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:11.388681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:11.406664  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:11.406687  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.434828  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:11.434851  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:11.481940  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:11.481966  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:11.503051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:11.503077  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:11.563912  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:14.064094  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:02:12.997638  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:15.497547  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:15.997757  622784 node_ready.go:38] duration metric: took 6m0.000870759s for node "no-preload-063943" to be "Ready" ...
	I1223 00:02:15.999489  622784 out.go:203] 
	W1223 00:02:16.002745  622784 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1223 00:02:16.002767  622784 out.go:285] * 
	W1223 00:02:16.002971  622784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1223 00:02:16.004060  622784 out.go:203] 
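	The GUEST_START failure above is the node-readiness wait hitting its deadline; the "wait 6m0s for node" matches minikube's default --wait-timeout of 6m0s. An illustrative follow-up, not executed in this run, would retry the profile with a longer deadline:

	    # Illustrative only: retry the same profile with a longer readiness
	    # deadline (--wait-timeout defaults to 6m0s).
	    minikube start -p no-preload-063943 --wait=all --wait-timeout=10m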
	I1223 00:02:14.075388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:14.094051  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.094075  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:14.094123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:14.112428  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.112454  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:14.112511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:14.130910  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.130935  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:14.130991  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:14.149172  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.149194  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:14.149247  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:14.167387  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.167414  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:14.167470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:14.187009  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.187034  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:14.187080  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:14.205514  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.205537  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:14.205604  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:14.223867  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.223893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:14.223906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:14.223919  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:14.278850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:14.278877  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:14.278904  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:14.297791  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:14.297817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:14.329010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:14.329035  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:14.375196  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:14.375228  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:16.895760  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:16.908501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:16.928330  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.928357  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:16.928403  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:16.947248  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.947272  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:16.947319  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:16.967240  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.967266  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:16.967318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:16.986942  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.986966  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:16.987025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:17.008674  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.008702  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:17.008760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:17.030466  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.030492  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:17.030548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:17.051687  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.051719  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:17.051773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:17.073457  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.073486  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:17.073502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:17.073521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:17.131973  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:17.132010  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:17.157397  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:17.157433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:17.217639  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:17.217669  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:17.217683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:17.239498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:17.239530  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:19.769550  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:19.782360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:19.802423  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.802446  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:19.802497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:19.821183  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.821214  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:19.821269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:19.840343  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.840369  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:19.840426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:19.857810  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.857835  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:19.857878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:19.875458  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.875481  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:19.875523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:19.893840  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.893864  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:19.893916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:19.912030  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.912053  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:19.912094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:19.930049  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.930066  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:19.930077  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:19.930088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:19.976279  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:19.976304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:19.995814  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:19.995837  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:20.054797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:20.054819  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:20.054833  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:20.074562  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:20.074588  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:20.651032  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:02:20.702678  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:20.702795  687772 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
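	Note on the --validate=false hint in the stderr above: validation fails only because kubectl cannot download the OpenAPI schema from the unreachable API server, and apply itself needs that same connection, so skipping validation would not get the manifest applied here. Illustrative form of the suggested command:

	    # Illustrative: skips the OpenAPI fetch, but apply still needs a
	    # reachable API server, so it fails the same way while :8443 is down.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	        -f /etc/kubernetes/addons/storage-provisioner.yaml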
	I1223 00:02:22.602868  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:22.614420  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:22.633871  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.633892  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:22.633942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:22.652376  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.652403  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:22.652454  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:22.670318  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.670340  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:22.670384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:22.688893  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.688913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:22.688966  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:22.707579  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.707614  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:22.707667  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:22.726147  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.726174  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:22.726230  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:22.744895  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.744919  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:22.744975  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:22.765807  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.765834  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:22.765848  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:22.765858  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:22.786075  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:22.786111  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:22.814010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:22.814034  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:22.859717  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:22.859741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:22.878865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:22.878889  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:22.933790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
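The block above is one full iteration of minikube's apiserver readiness probe. The same checks can be replayed manually with the commands the log itself runs (a sketch; execute on the node):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                       # no output: the process never started
	docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'  # empty: no container was even created
	sudo journalctl -u kubelet -n 400                                  # kubelet logs are the next stop when the list is empty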
[Identical probe cycles repeat at 00:02:25, 00:02:28, 00:02:31, 00:02:34, 00:02:37 and 00:02:39 (kubectl PIDs 5752, 5937, 6086, 6246, 6419, 6584): pgrep finds no kube-apiserver process, every docker ps -a --filter=name=k8s_* query (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) returns 0 containers, the kubelet/dmesg/Docker/container-status log gathers succeed, and each describe-nodes attempt fails with "connection refused" on localhost:8443. Only timestamps and PIDs differ; the near-verbatim repetitions are elided here.]
	I1223 00:02:40.946301  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:02:41.000005  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:41.000109  687772 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
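Every docker ps -a --filter=name=k8s_* query in this run returns an empty list, so the control-plane containers were never created and no addon apply can succeed, with or without validation. If an exited kube-apiserver container had shown up, its logs could be pulled directly (a sketch; <container-id> is a placeholder for an ID printed by the first command):

	docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}} {{.Status}}'
	docker logs <container-id> 2>&1 | tail -n 50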
	I1223 00:02:41.001884  687772 out.go:179] * Enabled addons: 
	I1223 00:02:41.002846  687772 addons.go:530] duration metric: took 1m58.614813363s for enable addons: enabled=[]
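The empty enabled=[] set and the 1m58s duration confirm that every addon callback exhausted its retries against the unreachable apiserver. A quick sanity check is whether the kubeconfig used by the applies really targets localhost:8443 (a sketch; the file path is taken from the log):

	sudo grep -n 'server:' /var/lib/minikube/kubeconfig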
[The same probe cycle repeats at 00:02:42 (kubectl PID 6762) with identical results: no control-plane containers found, describe nodes refused on localhost:8443; elided.]
	I1223 00:02:45.306682  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:45.318226  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:45.337265  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.337287  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:45.337343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:45.355924  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.355945  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:45.355990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:45.374282  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.374303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:45.374348  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:45.394500  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.394533  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:45.394584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:45.412466  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.412489  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:45.412538  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:45.431148  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.431185  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:45.431234  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:45.450281  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.450303  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:45.450352  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:45.468758  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.468787  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:45.468804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:45.468818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.520708  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:45.520742  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:45.542983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:45.543013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:45.598778  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:45.591672    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.592201    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.593887    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.594424    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.595931    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:45.598798  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:45.598812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:45.617903  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:45.617931  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.156370  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:48.167842  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:48.187202  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.187224  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:48.187268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:48.206448  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.206471  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:48.206516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:48.225302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.225322  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:48.225373  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:48.244155  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.244185  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:48.244245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:48.264312  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.264350  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:48.264418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:48.284233  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.284260  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:48.284317  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:48.303899  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.303924  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:48.303973  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:48.324302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.324335  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:48.324350  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:48.324366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:48.345435  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:48.345463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:48.402949  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:48.402972  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:48.402984  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:48.423927  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:48.423954  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.452771  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:48.452799  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
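
Each "0 containers" / "No container was found matching" pair above records one docker query filtered on the kubelet-managed container name prefix k8s_<component>. The probes can be reproduced in one pass; the sketch below is hypothetical (the helper name is invented), but the docker command line is copied from the log and assumes a local docker CLI:

    // container_probe.go - hypothetical sketch; the docker invocation
    // mirrors the one the harness runs in each cycle above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The same component list the harness walks in each cycle.
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        } {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: probe failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
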
	I1223 00:02:51.001239  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:51.013175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:51.032822  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.032846  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:51.032898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:51.051652  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.051682  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:51.051724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:51.070373  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.070395  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:51.070448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:51.088655  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.088676  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:51.088732  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:51.108004  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.108025  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:51.108078  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:51.126636  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.126662  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:51.126728  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:51.145355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.145385  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:51.145451  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:51.164355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.164384  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:51.164396  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:51.164409  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:51.191698  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:51.191724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.238383  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:51.238411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:51.260545  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:51.260580  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:51.318147  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:51.310651    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.311192    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.312772    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.313264    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.314877    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:51.318168  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:51.318182  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:53.838848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:53.850007  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:53.868584  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.868622  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:53.868663  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:53.887617  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.887640  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:53.887687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:53.906384  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.906409  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:53.906453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:53.924912  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.924938  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:53.924988  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:53.943400  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.943425  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:53.943477  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:53.961941  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.961969  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:53.962024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:53.980915  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.980941  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:53.980987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:53.998798  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.998817  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:53.998827  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:53.998839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:54.017064  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:54.017089  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:54.045091  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:54.045114  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:54.090278  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:54.090307  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:54.111890  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:54.111920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:54.166797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:54.159777    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.160366    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.161942    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.162343    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.163852    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.668571  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:56.680147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:56.699018  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.699042  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:56.699093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:56.716996  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.717019  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:56.717068  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:56.735529  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.735565  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:56.735644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:56.756677  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.756701  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:56.756757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:56.777819  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.777850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:56.777905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:56.799967  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.799997  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:56.800054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:56.818811  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.818836  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:56.818881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:56.837426  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.837461  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:56.837473  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:56.837487  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:56.893850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.893879  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:56.893894  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:56.912125  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:56.912151  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:56.939250  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:56.939279  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:56.986566  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:56.986599  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
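
With no component containers to inspect, each cycle falls back to host-level sources: the kubelet and docker/cri-docker units via journalctl, recent kernel warnings via dmesg, and a crictl (or docker) process listing. A sketch that collects the same set; the shell commands are copied verbatim from the log, while the Go wrapper is hypothetical and runs them locally rather than over SSH:

    // log_gather.go - hypothetical sketch; the commands are the ones
    // the harness runs via ssh_runner in the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one shell command and prints its combined output.
    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
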
	I1223 00:02:59.506330  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:59.518294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:59.540502  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.540529  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:59.540586  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:59.559288  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.559322  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:59.559372  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:59.577919  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.577945  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:59.578002  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:59.596632  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.596655  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:59.596705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:59.614750  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.614775  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:59.614826  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:59.632989  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.633007  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:59.633057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:59.650953  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.650972  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:59.651020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:59.669171  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.669190  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:59.669202  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:59.669214  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:59.713997  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:59.714026  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.733682  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:59.733709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:59.801000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:59.801018  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:59.801029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:59.819988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:59.820018  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.350019  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:02.361484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:02.380765  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.380793  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:02.380841  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:02.398822  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.398847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:02.398892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:02.416468  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.416488  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:02.416530  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:02.435155  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.435182  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:02.435237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:02.453935  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.453961  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:02.454012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:02.472347  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.472376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:02.472445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:02.490480  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.490505  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:02.490562  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:02.510458  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.510485  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:02.510498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:02.510509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.541744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:02.541769  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:02.587578  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:02.587619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:02.607135  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:02.607161  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:02.663082  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:02.663104  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:02.663117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.182740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:05.194033  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:05.212783  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.212809  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:05.212868  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:05.230615  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.230643  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:05.230687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:05.249068  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.249091  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:05.249140  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:05.268884  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.268913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:05.268965  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:05.288077  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.288103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:05.288159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:05.306886  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.306916  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:05.306970  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:05.325552  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.325579  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:05.325644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:05.344222  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.344252  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:05.344264  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:05.344276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:05.389222  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:05.389252  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:05.409357  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:05.409384  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:05.466244  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:05.466269  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:05.466285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.484803  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:05.484830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.013719  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:08.026534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:08.046545  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.046567  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:08.046633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:08.065353  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.065375  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:08.065423  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:08.084081  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.084109  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:08.084156  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:08.102488  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.102514  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:08.102570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:08.121317  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.121347  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:08.121391  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:08.139209  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.139232  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:08.139282  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:08.157445  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.157465  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:08.157510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:08.177073  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.177101  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:08.177115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:08.177131  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:08.195188  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:08.195222  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.223256  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:08.223282  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:08.270668  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:08.270696  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:08.290331  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:08.290355  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:08.344801  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:10.846497  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:10.857798  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:10.876797  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.876818  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:10.876863  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:10.895838  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.895862  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:10.895907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:10.913971  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.913996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:10.914038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:10.932422  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.932449  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:10.932501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:10.951013  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.951034  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:10.951076  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:10.969170  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.969198  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:10.969242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:10.988274  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.988332  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:10.988382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:11.006849  687772 logs.go:282] 0 containers: []
	W1223 00:03:11.006875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:11.006889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:11.006906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:11.059569  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:11.059619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:11.079808  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:11.079835  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:11.134768  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
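Each failed "describe nodes" attempt above reduces to one symptom: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and nothing is listening on that port, so every request dies at TCP connect. A quick standalone check (an illustrative sketch, not part of the minikube test harness) that separates "connection refused" from a hang:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig in the log aims kubectl at localhost:8443.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" means the port is closed (no apiserver process);
		// a timeout would instead point at firewalling or routing.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}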
	I1223 00:03:11.134794  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:11.134817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:11.153181  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:11.153207  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
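The block above is one complete diagnostic cycle: probe each expected control-plane container by its k8s_ name and, when every probe returns zero IDs, fall back to collecting kubelet, dmesg, describe-nodes, Docker, and container-status output. A minimal standalone sketch of the container probe, using the same docker ps filters the log shows (the program structure and messages are illustrative, not minikube's internal ssh_runner/logs code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the k8s_* container names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, c := range components {
		// Same filter the log shows: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("containers for %q: %v\n", c, ids)
		}
	}
}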
	(this diagnostic cycle repeats with identical results at 00:03:13, 00:03:16, 00:03:19, 00:03:22, 00:03:25, 00:03:28, 00:03:30, and 00:03:33: zero containers found for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard, and every "describe nodes" attempt fails with the same connection-refused errors against localhost:8443)
	I1223 00:03:36.371365  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:36.383229  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:36.403744  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.403771  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:36.403818  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:36.422087  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.422109  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:36.422163  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:36.440967  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.440989  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:36.441046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:36.459110  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.459137  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:36.459184  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:36.477754  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.477781  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:36.477838  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:36.496775  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.496803  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:36.496857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:36.516542  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.516577  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:36.516652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:36.537692  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.537720  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:36.537731  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:36.537744  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:36.585346  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:36.585376  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:36.605519  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:36.605545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:36.660230  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:36.660253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:36.660269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:36.678368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:36.678395  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
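Each cycle above repeats the same probe: look for a running kube-apiserver process, then ask Docker for every expected control-plane container by its k8s_ name prefix. A minimal hand-run sketch of that probe from inside the minikube node (the loop and format string are illustrative, not minikube's source):

    # Probe for the apiserver process, then each expected k8s_ container (sketch)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Status}}'
    done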
	I1223 00:03:39.206672  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:39.218123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:39.236299  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.236322  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:39.236384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:39.256168  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.256194  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:39.256256  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:39.278907  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.278934  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:39.278987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:39.299685  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.299712  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:39.299771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:39.319824  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.319847  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:39.319890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:39.339314  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.339340  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:39.339388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:39.357097  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.357122  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:39.357178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:39.375484  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.375506  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:39.375518  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:39.375528  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:39.422143  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:39.422171  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:39.442163  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:39.442190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:39.499251  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:39.499300  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:39.499313  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:39.520555  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:39.520585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
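The "container status" step is a fallback chain: use crictl when it is on the PATH, otherwise degrade to plain docker ps. The same idiom with POSIX $() substitution instead of backticks (a sketch, not minikube's code):

    # crictl if installed, else docker; either way list all containers (sketch)
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a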
	I1223 00:03:42.050334  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:42.062329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:42.081392  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.081414  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:42.081466  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:42.100032  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.100060  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:42.100108  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:42.118667  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.118701  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:42.118755  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:42.137260  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.137280  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:42.137324  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:42.156202  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.156223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:42.156268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:42.173781  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.173805  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:42.173849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:42.191802  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.191823  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:42.191865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:42.210403  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.210428  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:42.210439  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:42.210451  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:42.257288  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:42.257324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:42.279921  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:42.279950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:42.335965  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:42.335989  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:42.336007  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:42.354691  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:42.354717  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:44.883238  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:44.894443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:44.913117  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.913141  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:44.913198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:44.931401  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.931426  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:44.931481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:44.950195  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.950223  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:44.950276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:44.968485  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.968511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:44.968566  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:44.987148  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.987171  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:44.987233  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:45.005624  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.005646  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:45.005693  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:45.023699  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.023724  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:45.023791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:45.042874  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.042892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:45.042903  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:45.042913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:45.091063  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:45.091090  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:45.111078  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:45.111104  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:45.165637  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:45.165664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:45.165680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:45.183805  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:45.183831  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.712691  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:47.724393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:47.743118  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.743145  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:47.743192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:47.764020  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.764047  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:47.764100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:47.784950  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.784979  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:47.785031  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:47.805130  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.805153  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:47.805202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:47.824818  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.824840  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:47.824881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:47.842122  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.842142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:47.842182  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:47.860107  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.860126  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:47.860169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:47.877957  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.877981  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:47.877991  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:47.878003  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.913554  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:47.913583  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:47.959272  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:47.959301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:47.979197  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:47.979224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:48.034846  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:48.034864  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:48.034876  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
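Every describe-nodes attempt fails the same way: kubectl dials the kubeconfig's server address, localhost:8443, and the connection is refused because nothing is listening there. One way to confirm that from inside the node (a hedged sketch; the ss check and the apiserver's /livez endpoint are assumptions, not taken from this log):

    # Check whether anything listens on the apiserver port (sketch)
    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
    curl -sk https://localhost:8443/livez || echo "apiserver unreachable"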
	I1223 00:03:50.554653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:50.565766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:50.584506  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.584527  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:50.584568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:50.603087  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.603112  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:50.603159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:50.621694  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.621718  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:50.621758  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:50.640855  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.640882  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:50.640950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:50.658573  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.658615  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:50.658659  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:50.676703  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.676725  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:50.676792  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:50.694997  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.695020  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:50.695084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:50.711361  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.711382  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:50.711393  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:50.711405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:50.739475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:50.739500  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:50.789788  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:50.789828  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:50.810067  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:50.810096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:50.864855  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:50.857771   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.858239   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.859923   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.860349   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.861907   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:50.864881  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:50.864896  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.383457  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:53.394757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:53.414248  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.414277  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:53.414341  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:53.432950  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.432970  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:53.433020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:53.452058  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.452081  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:53.452143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:53.470670  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.470698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:53.470751  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:53.489416  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.489443  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:53.489486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:53.508963  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.508995  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:53.509057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:53.530683  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.530710  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:53.530770  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:53.551545  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.551577  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:53.551610  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:53.551627  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.570296  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:53.570324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:53.598123  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:53.598154  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:53.646248  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:53.646280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:53.666819  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:53.666844  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:53.722068  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:53.715109   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.715646   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717149   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717536   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.719006   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
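To reproduce a single failing probe outside the test harness, the exact command from the log can be re-run on the node; while the apiserver is down it exits 1 with the same connection-refused stderr:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig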
	I1223 00:03:56.223706  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:56.235187  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:56.255491  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.255511  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:56.255551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:56.274455  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.274479  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:56.274519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:56.293621  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.293648  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:56.293702  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:56.312485  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.312511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:56.312558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:56.331239  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.331266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:56.331320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:56.349793  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.349813  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:56.349856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:56.368378  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.368397  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:56.368446  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:56.386706  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.386730  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:56.386744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:56.386759  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:56.435036  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:56.435067  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:56.456766  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:56.456793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:56.515022  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:56.506534   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.507203   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.508885   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.509323   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.510920   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:56.515044  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:56.515056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:56.537382  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:56.537424  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.067413  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:59.078926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:59.098458  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.098490  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:59.098543  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:59.119074  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.119100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:59.119146  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:59.138014  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.138036  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:59.138082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:59.157367  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.157390  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:59.157433  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:59.175923  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.175950  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:59.176008  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:59.194211  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.194243  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:59.194295  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:59.212980  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.213004  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:59.213050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:59.231233  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.231255  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:59.231266  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:59.231277  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.260354  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:59.260377  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:59.307751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:59.307784  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:59.327756  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:59.327782  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:59.382873  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:59.375811   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.376331   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.377895   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.378317   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.379902   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:59.382895  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:59.382908  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:01.903304  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:01.914514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:01.933300  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.933328  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:01.933388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:01.952153  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.952181  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:01.952225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:01.970903  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.970933  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:01.970987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:01.989493  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.989513  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:01.989567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:02.009114  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.009141  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:02.009198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:02.030277  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.030310  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:02.030365  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:02.050466  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.050492  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:02.050551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:02.069917  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.069941  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:02.069956  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:02.069970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:02.115721  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:02.115750  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:02.135348  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:02.135373  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:02.190691  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:02.183688   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.184205   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.185799   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.186209   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.187682   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:02.190712  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:02.190724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:02.209097  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:02.209122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:04.737357  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:04.748553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:04.770341  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.770369  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:04.770424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:04.791137  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.791165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:04.791214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:04.810520  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.810541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:04.810607  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:04.828972  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.829000  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:04.829055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:04.849074  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.849096  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:04.849148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:04.868041  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.868063  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:04.868115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:04.886481  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.886504  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:04.886567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:04.905235  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.905262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:04.905274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:04.905285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:04.953851  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:04.953880  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:04.973781  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:04.973806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:05.031345  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:05.031368  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:05.031383  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:05.050812  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:05.050839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
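From here the report records the same probe loop re-run on a fixed interval, presumably until a start timeout expires. Rendered as shell, the recorded behaviour is roughly the loop below; this is an approximation for readability (the real loop lives in minikube's Go code, visible above as ssh_runner calls), assuming the k8s_ container-name prefix used throughout this log:

	# Approximate shell rendering of the recorded retry loop (illustrative only).
	while true; do
	  # the loop exits once an apiserver process appears...
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  # ...otherwise it lists each expected control-plane container (all of
	  # which return 0 containers in the log above) and tries again.
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	  done
	  sleep 2.5
	done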
	I1223 00:04:07.580204  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:07.592091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:07.611238  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.611267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:07.611318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:07.630713  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.630736  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:07.630786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:07.649511  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.649541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:07.649620  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:07.668236  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.668264  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:07.668323  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:07.687077  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.687101  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:07.687158  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:07.705952  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.705982  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:07.706036  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:07.725156  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.725178  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:07.725224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:07.744024  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.744049  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:07.744063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:07.744079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:07.797680  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:07.797721  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:07.819453  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:07.819481  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:07.875026  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:07.875046  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:07.875059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:07.893942  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:07.893968  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:10.422234  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:10.433749  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:10.453027  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.453049  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:10.453099  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:10.471766  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.471789  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:10.471840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:10.489960  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.489981  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:10.490025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:10.508537  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.508558  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:10.508614  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:10.527336  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.527362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:10.527418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:10.545995  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.546019  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:10.546074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:10.564167  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.564196  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:10.564254  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:10.582919  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.582947  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:10.582961  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:10.582974  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:10.630969  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:10.631004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:10.651161  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:10.651197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:10.709000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:10.709026  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:10.709041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:10.728175  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:10.728203  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:13.258812  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:13.271437  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:13.293437  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.293468  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:13.293525  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:13.313483  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.313508  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:13.313568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:13.333612  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.333643  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:13.333709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:13.353086  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.353111  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:13.353169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:13.372208  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.372230  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:13.372275  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:13.391431  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.391457  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:13.391507  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:13.410402  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.410434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:13.410502  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:13.428653  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.428675  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:13.428687  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:13.428709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:13.474690  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:13.474729  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:13.495426  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:13.495457  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:13.550790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:13.550810  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:13.550822  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:13.569370  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:13.569397  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.099133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:16.110484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:16.129712  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.129743  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:16.129808  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:16.147785  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.147808  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:16.147854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:16.167259  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.167284  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:16.167333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:16.186151  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.186178  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:16.186223  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:16.206074  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.206099  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:16.206154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:16.225296  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.225319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:16.225369  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:16.244091  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.244115  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:16.244160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:16.263620  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.263643  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:16.263655  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:16.263667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:16.323241  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:16.323265  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:16.323281  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:16.342320  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:16.342346  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.371156  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:16.371183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:16.421158  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:16.421188  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:18.942795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:18.954257  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:18.974190  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.974217  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:18.974270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:18.993178  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.993200  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:18.993245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:19.013377  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.013405  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:19.013465  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:19.034917  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.034941  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:19.034990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:19.054247  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.054271  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:19.054326  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:19.072206  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.072235  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:19.072297  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:19.091855  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.091882  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:19.091933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:19.111067  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.111100  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:19.111114  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:19.111127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:19.161923  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:19.161955  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:19.182679  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:19.182708  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:19.239475  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:19.239503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:19.239521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:19.259046  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:19.259075  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:21.799246  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:21.810742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:21.830826  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.830852  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:21.830896  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:21.849427  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.849455  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:21.849501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:21.867823  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.867847  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:21.867891  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:21.886431  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.886452  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:21.886508  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:21.905079  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.905103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:21.905160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:21.923344  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.923365  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:21.923407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:21.941945  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.941966  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:21.942012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:21.959749  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.959773  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:21.959785  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:21.959795  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:21.979750  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:21.979776  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:22.008278  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:22.008301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:22.059988  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:22.060022  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:22.080174  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:22.080201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:22.135625  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:24.636526  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:24.647769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:24.666800  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.666823  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:24.666873  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:24.685078  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.685100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:24.685153  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:24.703219  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.703238  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:24.703287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:24.721619  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.721647  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:24.721705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:24.740548  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.740570  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:24.740632  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:24.758544  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.758568  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:24.758633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:24.776285  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.776317  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:24.776445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:24.794360  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.794386  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:24.794399  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:24.794413  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:24.840111  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:24.840142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:24.860260  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:24.860286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:24.915702  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:24.915723  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:24.915736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:24.934368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:24.934394  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.463653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:27.474997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:27.494098  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.494127  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:27.494183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:27.513771  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.513799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:27.513855  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:27.534688  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.534720  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:27.534777  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:27.553043  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.553065  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:27.553115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:27.571979  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.572005  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:27.572049  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:27.590357  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.590376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:27.590419  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:27.609465  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.609490  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:27.609547  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:27.628214  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.628238  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:27.628253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:27.628267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:27.646519  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:27.646545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.674935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:27.674958  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:27.721277  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:27.721306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:27.741140  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:27.741165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:27.796676  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:30.297779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:30.308987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:30.327806  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.327827  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:30.327885  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:30.347142  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.347165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:30.347216  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:30.365629  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.365656  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:30.365729  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:30.383470  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.383496  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:30.383552  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:30.402127  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.402152  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:30.402214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:30.420681  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.420706  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:30.420757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:30.439453  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.439475  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:30.439517  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:30.458669  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.458691  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:30.458702  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:30.458713  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:30.505022  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:30.505050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:30.528295  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:30.528323  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:30.585055  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:30.585076  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:30.585088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:30.604200  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:30.604229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
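
The container-status command is a two-level shell fallback: `which crictl || echo crictl` expands to the crictl path when installed (or to the bare name, which then fails), and the outer || falls back to docker ps -a. A simplified Go version of the same preference order (simplified in that the original also falls back when crictl is present but exits non-zero):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it is on PATH; otherwise fall back to docker,
        // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
        cmd := exec.Command("crictl", "ps", "-a")
        if _, err := exec.LookPath("crictl"); err != nil {
            cmd = exec.Command("docker", "ps", "-a")
        }
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
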
	I1223 00:04:33.131779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
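
Each retry cycle opens with pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest match. The roughly 2.5-second spacing between cycle timestamps suggests a poll-until-ready loop; a hedged Go sketch of such a loop (interval and deadline are assumptions, not values from this report):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists, so a nil
            // error from Run() means the apiserver process was found.
            err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(2500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
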
	I1223 00:04:33.143670  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:33.163179  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.163200  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:33.163245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:33.182970  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.182992  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:33.183043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:33.201569  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.201609  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:33.201656  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:33.219907  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.219931  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:33.219989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:33.239604  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.239630  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:33.239675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:33.258182  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.258211  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:33.258263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:33.277606  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.277632  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:33.277678  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:33.297258  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.297283  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:33.297296  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:33.297312  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:33.344903  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:33.344932  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:33.364742  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:33.364768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:33.420528  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:33.420549  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:33.420560  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:33.439384  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:33.439411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:35.968903  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:35.980276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:35.999444  687772 logs.go:282] 0 containers: []
	W1223 00:04:35.999474  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:35.999534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:36.018792  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.018819  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:36.018880  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:36.036956  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.036985  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:36.037043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:36.055239  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.055265  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:36.055315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:36.073241  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.073272  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:36.073325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:36.091575  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.091613  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:36.091662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:36.110369  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.110396  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:36.110448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:36.128481  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.128505  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:36.128516  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:36.128526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:36.176492  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:36.176526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:36.196649  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:36.196675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:36.253201  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:36.253224  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:36.253241  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:36.273351  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:36.273379  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.804411  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:38.815899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:38.834644  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.834668  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:38.834713  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:38.853892  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.853919  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:38.853967  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:38.871484  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.871505  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:38.871554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:38.889803  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.889828  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:38.889879  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:38.909558  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.909586  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:38.909652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:38.929528  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.929553  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:38.929624  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:38.948153  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.948181  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:38.948241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:38.966657  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.966679  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:38.966689  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:38.966711  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.994610  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:38.994637  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:39.040694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:39.040722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:39.060391  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:39.060417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:39.116169  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:39.116189  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:39.116201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.638009  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:41.650427  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:41.670214  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.670241  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:41.670289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:41.689539  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.689568  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:41.689651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:41.708449  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.708472  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:41.708520  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:41.727897  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.727918  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:41.727963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:41.748169  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.748200  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:41.748252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:41.767148  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.767172  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:41.767224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:41.789562  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.789589  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:41.789665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:41.808259  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.808281  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:41.808292  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:41.808304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.827093  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:41.827120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:41.854644  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:41.854671  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:41.901960  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:41.901995  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:41.921983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:41.922011  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:41.978723  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:44.479583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:44.491055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:44.513749  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.513779  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:44.513836  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:44.535619  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.535648  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:44.535722  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:44.555441  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.555464  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:44.555512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:44.574828  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.574851  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:44.574895  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:44.593270  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.593293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:44.593350  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:44.612157  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.612182  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:44.612239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:44.630342  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.630366  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:44.630417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:44.648864  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.648893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:44.648905  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:44.648917  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:44.698462  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:44.698494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:44.718432  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:44.718463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:44.777738  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:44.777764  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:44.777781  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:44.798488  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:44.798522  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:47.328787  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:47.340091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:47.359764  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.359786  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:47.359834  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:47.378531  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.378557  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:47.378633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:47.397279  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.397303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:47.397351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:47.415379  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.415404  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:47.415449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:47.433342  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.433363  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:47.433407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:47.452134  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.452153  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:47.452195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:47.470492  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.470514  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:47.470565  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:47.489435  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.489462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:47.489475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:47.489490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:47.543310  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:47.543341  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:47.563678  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:47.563716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:47.618877  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:47.611492   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.612043   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.613658   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.614136   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.615686   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:47.618902  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:47.618916  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:47.637117  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:47.637142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:50.165288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:50.176485  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:50.195504  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.195530  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:50.195573  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:50.214411  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.214435  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:50.214486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:50.232050  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.232073  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:50.232113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:50.249723  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.249747  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:50.249805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:50.269197  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.269220  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:50.269262  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:50.287018  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.287042  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:50.287084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:50.304852  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.304876  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:50.304923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:50.323126  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.323150  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:50.323164  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:50.323177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:50.371303  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:50.371328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:50.391396  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:50.391419  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:50.446479  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:50.439351   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.440054   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.441636   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.442091   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.443655   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:50.446503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:50.446519  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:50.466869  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:50.466895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.004783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:53.016488  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:53.037102  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.037130  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:53.037175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:53.056487  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.056509  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:53.056551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:53.074919  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.074938  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:53.074983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:53.093142  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.093163  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:53.093203  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:53.112007  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.112030  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:53.112079  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:53.130737  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.130759  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:53.130802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:53.149980  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.150009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:53.150057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:53.167468  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.167493  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:53.167503  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:53.167513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.195775  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:53.195800  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:53.243212  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:53.243238  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:53.263047  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:53.263073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:53.319009  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:53.311761   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.312309   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.313970   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.314449   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.315934   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:53.319029  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:53.319041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:55.838963  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:55.850169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:55.868811  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.868833  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:55.868878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:55.887281  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.887309  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:55.887361  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:55.905343  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.905372  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:55.905425  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:55.922787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.922811  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:55.922858  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:55.941063  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.941090  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:55.941143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:55.960388  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.960413  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:55.960549  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:55.978787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.978810  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:55.978854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:55.996489  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.996516  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:55.996530  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:55.996542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:56.048197  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:56.048229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:56.068640  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:56.068668  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:56.124436  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:56.117357   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.117926   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119480   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119940   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.121489   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:56.124461  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:56.124478  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:56.143079  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:56.143102  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.672032  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:58.683539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:58.702739  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.702762  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:58.702814  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:58.721434  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.721465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:58.721514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:58.741740  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.741768  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:58.741811  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:58.760960  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.760982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:58.761035  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:58.780979  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.781001  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:58.781045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:58.799417  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.799453  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:58.799501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:58.817985  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.818007  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:58.818051  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:58.837633  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.837659  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:58.837671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:58.837683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:58.856421  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:58.856448  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.883550  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:58.883574  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:58.932130  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:58.932158  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:58.953160  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:58.953189  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:59.009951  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:59.002105   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.002736   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004318   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004809   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.006430   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:59.002105   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.002736   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004318   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004809   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.006430   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
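
The describe-nodes failures above stall in kubectl's discovery phase: before it can describe anything, kubectl must fetch the server's API group list (the memcache.go lines come from client-go's cached discovery), and that request never gets past the TCP connect. A minimal client-go sketch of the same node query against the node-local kubeconfig (assumes k8s.io/client-go and k8s.io/apimachinery in go.mod; illustrative, not the harness's code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube writes onto the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("build client:", err)
		return
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		// With no apiserver on :8443 this fails the same way as the log:
		// the dial is refused before any API call can succeed.
		fmt.Println("list nodes:", err)
		return
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
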
	I1223 00:05:01.512529  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:01.523921  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:01.542499  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.542525  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:01.542569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:01.560824  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.560850  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:01.560892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:01.578994  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.579017  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:01.579060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:01.597267  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.597293  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:01.597346  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:01.615860  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.615880  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:01.615919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:01.635022  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.635045  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:01.635084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:01.654257  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.654282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:01.654338  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:01.672470  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.672492  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:01.672502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:01.672513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:01.720496  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:01.720525  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:01.740698  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:01.740724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:01.800538  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:01.800562  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:01.800579  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:01.820265  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:01.820291  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.348938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:04.360190  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:04.379095  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.379124  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:04.379177  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:04.396991  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.397012  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:04.397057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:04.415658  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.415682  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:04.415750  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:04.434023  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.434049  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:04.434093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:04.452721  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.452744  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:04.452791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:04.471221  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.471247  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:04.471294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:04.489656  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.489685  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:04.489734  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:04.508637  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.508669  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:04.508689  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:04.508702  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:04.526928  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:04.526953  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.553896  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:04.553923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:04.602972  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:04.602999  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:04.622788  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:04.622812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:04.678232  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
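
The timestamps give the cadence of this wait loop: a full scan-and-gather cycle roughly every three seconds, each ending in the same refused connection. A hedged sketch of such a poll-until-deadline loop (interval and deadline here are illustrative guesses, not minikube's actual values):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is answering")
			return
		}
		// Roughly matches the spacing of the cycles in this log.
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("gave up: apiserver never came up")
}
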
	I1223 00:05:07.179923  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:07.191963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:07.211239  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.211263  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:07.211304  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:07.230281  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.230302  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:07.230343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:07.249365  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.249391  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:07.249443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:07.269410  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.269431  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:07.269484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:07.288681  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.288711  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:07.288756  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:07.307722  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.307742  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:07.307785  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:07.324479  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.324503  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:07.324557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:07.343010  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.343030  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:07.343041  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:07.343056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:07.370090  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:07.370116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:07.416268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:07.416294  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:07.436063  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:07.436088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:07.492624  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:07.492650  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:07.492667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.011735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:10.025412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:10.046816  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.046848  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:10.046917  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:10.065664  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.065693  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:10.065752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:10.084486  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.084512  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:10.084569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:10.103489  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.103510  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:10.103563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:10.121383  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.121413  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:10.121457  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:10.139817  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.139840  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:10.139883  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:10.158123  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.158142  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:10.158195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:10.176690  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.176714  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:10.176728  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:10.176743  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:10.221786  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:10.221818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:10.241642  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:10.241670  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:10.306092  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:10.306110  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:10.306122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.325227  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:10.325254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:12.853199  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:12.864559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:12.883528  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.883553  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:12.883615  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:12.901914  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.901946  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:12.902003  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:12.920676  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.920703  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:12.920746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:12.938812  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.938840  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:12.938898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:12.956564  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.956588  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:12.956651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:12.975030  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.975056  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:12.975112  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:12.992748  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.992770  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:12.992819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:13.013710  687772 logs.go:282] 0 containers: []
	W1223 00:05:13.013733  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:13.013744  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:13.013756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:13.044889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:13.044920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:13.090565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:13.090611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:13.110578  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:13.110614  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:13.166048  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:13.166066  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:13.166079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.685941  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:15.697434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:15.716560  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.716607  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:15.716664  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:15.735775  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.735799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:15.735847  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:15.753974  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.753996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:15.754046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:15.771763  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.771788  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:15.771846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:15.790222  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.790249  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:15.790294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:15.808671  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.808691  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:15.808735  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:15.827295  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.827324  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:15.827377  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:15.845637  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.845658  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:15.845668  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:15.845679  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:15.892975  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:15.893004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:15.912599  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:15.912626  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:15.967763  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:15.967788  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:15.967801  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.986603  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:15.986632  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
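
The per-component scans above rely on a naming convention: when Kubernetes runs on the Docker runtime (via cri-dockerd), each managed container is named k8s_<container>_<pod>_<namespace>_..., so filtering on the k8s_ prefix is enough to find control-plane containers. A sketch of the same scan (assumes a local docker CLI; illustrative, not minikube code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns"} {
		// Same filter and format the log uses: match on the k8s_ name
		// prefix and print only container IDs.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(comp, "scan failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers %v\n", comp, len(ids), ids)
	}
}
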
	I1223 00:05:18.516732  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:18.529415  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:18.549048  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.549069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:18.549113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:18.567672  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.567705  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:18.567771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:18.586513  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.586538  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:18.586613  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:18.604518  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.604538  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:18.604579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:18.623446  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.623467  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:18.623510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:18.642213  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.642230  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:18.642279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:18.660501  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.660521  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:18.660563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:18.678846  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.678869  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:18.678882  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:18.678893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:18.727936  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:18.727965  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:18.749033  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:18.749059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:18.804351  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:18.804386  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:18.804401  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:18.822650  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:18.822681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:21.351938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:21.363094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:21.382091  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.382123  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:21.382179  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:21.400790  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.400813  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:21.400861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:21.418989  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.419014  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:21.419060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:21.437814  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.437839  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:21.437898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:21.456967  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.456991  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:21.457045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:21.475541  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.475566  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:21.475644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:21.494493  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.494518  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:21.494576  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:21.513952  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.513979  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:21.513990  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:21.514001  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:21.563253  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:21.563283  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:21.583663  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:21.583693  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:21.638754  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:21.638774  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:21.638786  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:21.657674  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:21.657704  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
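
The "container status" gathers use a fallback chain: prefer crictl for the CRI-level view, and fall back to a plain docker ps -a when crictl is unavailable. A simplified Go sketch of that preference (the log's shell form also falls back when crictl itself errors; sudo and the tool paths are assumptions about the node):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if path, err := exec.LookPath("crictl"); err == nil {
		// CRI view: containers as the kubelet sees them.
		out, _ := exec.Command("sudo", path, "ps", "-a").CombinedOutput()
		fmt.Print(string(out))
		return
	}
	// No crictl on PATH: the Docker runtime can still enumerate containers.
	out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	fmt.Print(string(out))
}
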
	I1223 00:05:24.188905  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:24.200277  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:24.220108  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.220133  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:24.220188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:24.240286  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.240307  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:24.240351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:24.260644  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.260670  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:24.260724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:24.282918  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.282943  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:24.282990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:24.302929  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.302956  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:24.303013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:24.322124  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.322145  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:24.322196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:24.340965  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.340993  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:24.341050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:24.360121  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.360148  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:24.360162  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:24.360177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:24.406776  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:24.406809  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:24.428882  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:24.428909  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:24.484257  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
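	The describe-nodes failure follows directly from those empty probe results: the on-node kubeconfig points at https://localhost:8443, and with no kube-apiserver container running nothing listens there, so every request dies with "connection refused" before reaching the API. Two hypothetical spot checks for the same condition (assumptions, not part of the test; run inside the node):

	    sudo ss -ltn 'sport = :8443'      # no listener on 8443 means the apiserver is not up
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz   # refused until it is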
	I1223 00:05:24.484286  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:24.484304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:24.504724  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:24.504752  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
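	Each gathering pass pulls the same five sources; note the fallback pattern in the container-status command, which prefers crictl when it is on PATH and otherwise drops back to plain docker ps. Condensed from the commands run above:

	    sudo journalctl -u kubelet -n 400                  # kubelet unit logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig      # fails while the apiserver is down
	    sudo journalctl -u docker -u cri-docker -n 400     # Docker and cri-dockerd unit logs
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a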
	I1223 00:05:27.038561  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:27.050259  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:27.069265  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.069288  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:27.069333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:27.088081  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.088108  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:27.088171  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:27.107172  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.107198  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:27.107246  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:27.125773  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.125804  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:27.125862  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:27.144259  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.144282  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:27.144339  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:27.163197  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.163217  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:27.163263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:27.181942  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.181971  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:27.182030  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:27.199936  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.199964  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:27.199980  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:27.199996  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:27.218431  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:27.218456  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.246756  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:27.246783  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:27.297557  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:27.297603  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:27.318177  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:27.318205  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:27.374968  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
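	From this point the log repeats the identical probe-and-gather cycle at roughly 2.5 to 3 second intervals, each round triggered by the pgrep check for a live apiserver process. The loop shape, as a sketch only (the interval and the helper name are assumptions, not minikube's actual implementation):

	    # Sketch: wait for a running kube-apiserver, re-collecting diagnostics between attempts.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      collect_diagnostics   # hypothetical stand-in for the probe/gather pass above
	      sleep 2.5
	    done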
	I1223 00:05:29.875712  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:29.887100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:29.906809  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.906834  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:29.906892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:29.926388  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.926414  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:29.926467  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:29.946220  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.946248  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:29.946302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:29.967102  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.967131  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:29.967188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:29.986540  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.986564  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:29.986631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:30.004809  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.004835  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:30.004881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:30.023625  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.023655  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:30.023711  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:30.042067  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.042089  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:30.042100  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:30.042120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:30.061885  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:30.061913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:30.090401  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:30.090432  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:30.138962  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:30.138993  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:30.159224  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:30.159250  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:30.216295  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:32.716974  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:32.728432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:32.748217  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.748245  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:32.748292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:32.767866  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.767887  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:32.767935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:32.788690  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.788723  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:32.788782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:32.808366  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.808397  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:32.808460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:32.827631  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.827655  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:32.827714  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:32.846429  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.846456  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:32.846511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:32.865177  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.865202  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:32.865258  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:32.885235  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.885258  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:32.885268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:32.885280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:32.905218  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:32.905245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:32.960860  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:32.960885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:32.960905  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:32.979917  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:32.979943  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:33.008187  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:33.008218  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.555359  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:35.566888  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:35.586562  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.586588  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:35.586657  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:35.605495  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.605522  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:35.605579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:35.624671  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.624700  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:35.624760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:35.643198  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.643222  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:35.643278  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:35.662223  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.662245  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:35.662290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:35.681991  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.682016  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:35.682071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:35.700985  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.701009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:35.701062  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:35.719976  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.720000  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:35.720015  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:35.720029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.767694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:35.767728  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:35.792896  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:35.792935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:35.849448  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:35.849470  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:35.849491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:35.868248  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:35.868274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:38.397175  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:38.408856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:38.428054  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.428085  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:38.428141  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:38.447350  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.447376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:38.447428  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:38.466426  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.466455  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:38.466512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:38.486074  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.486104  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:38.486173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:38.505584  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.505626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:38.505709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:38.527387  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.527416  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:38.527473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:38.547928  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.547955  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:38.548015  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:38.568237  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.568262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:38.568274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:38.568285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:38.616522  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:38.616555  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:38.638676  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:38.638707  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:38.694984  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:38.695006  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:38.695019  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:38.713940  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:38.713969  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:41.244859  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:41.256283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:41.275201  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.275233  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:41.275280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:41.295272  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.295299  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:41.295353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:41.313039  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.313069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:41.313135  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:41.331394  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.331418  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:41.331491  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:41.350556  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.350583  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:41.350650  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:41.369215  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.369242  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:41.369290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:41.387799  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.387826  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:41.387877  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:41.406760  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.406785  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:41.406799  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:41.406813  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:41.453518  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:41.453548  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:41.473671  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:41.473700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:41.531098  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:41.531124  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:41.531139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:41.551968  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:41.551997  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:44.081115  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:44.092382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:44.111299  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.111326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:44.111381  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:44.130168  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.130196  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:44.130250  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:44.149028  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.149052  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:44.149109  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:44.167326  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.167346  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:44.167388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:44.185875  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.185898  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:44.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:44.205297  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.205320  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:44.205370  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:44.224561  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.224608  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:44.224661  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:44.242760  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.242782  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:44.242795  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:44.242808  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:44.290363  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:44.290399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:44.310780  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:44.310806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:44.367913  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:44.367931  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:44.367945  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:44.387052  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:44.387080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:46.916305  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:46.927926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:46.946856  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.946882  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:46.946941  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:46.965651  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.965674  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:46.965720  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:46.984835  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.984863  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:46.984920  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:47.005005  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.005033  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:47.005095  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:47.026916  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.026948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:47.026996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:47.047971  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.048003  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:47.048064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:47.067344  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.067372  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:47.067424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:47.087055  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.087079  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:47.087093  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:47.087107  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:47.134052  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:47.134085  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:47.154446  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:47.154479  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:47.210710  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:47.210734  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:47.210746  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:47.230988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:47.231017  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:49.759465  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:49.771325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:49.791131  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.791160  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:49.791219  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:49.810792  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.810814  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:49.810859  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:49.829432  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.829454  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:49.829499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:49.847527  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.847548  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:49.847603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:49.866252  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.866275  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:49.866315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:49.885934  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.885955  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:49.885996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:49.903668  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.903690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:49.903733  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:49.923276  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.923298  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:49.923309  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:49.923320  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:49.968185  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:49.968217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:49.988993  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:49.989021  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:50.052060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:50.052083  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:50.052100  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:50.070860  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:50.070885  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
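The container-status command is a two-step fallback: run crictl if which can resolve it, otherwise fall back to plain docker ps (the "echo crictl" branch deliberately yields a failing command so the || side fires). Written out under the assumption that at least one of the two tools is installed:

    # mirrors the backticked fallback in the log line above
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi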
	I1223 00:05:52.599679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
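The pgrep probe matches the full apiserver command line: -f compares the pattern against the whole cmdline, -x requires an exact whole-string match (which is why the pattern ends in .*), and -n keeps only the newest matching PID. Long-option equivalent from procps:

    sudo pgrep --full --exact --newest 'kube-apiserver.*minikube.*'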
	I1223 00:05:52.611289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:52.629699  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.629724  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:52.629782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:52.648660  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.648689  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:52.648740  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:52.667204  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.667232  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:52.667287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:52.685635  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.685667  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:52.685718  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:52.703669  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.703692  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:52.703742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:52.721467  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.721495  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:52.721553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:52.739858  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.739885  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:52.739930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:52.759123  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.759151  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
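Each scan is the same eight docker ps -a queries, one per expected container name, all coming back empty here; k8s_ is the name prefix cri-dockerd gives kubelet-managed containers. A hypothetical compact form of the same scan:

    # loop over the container names probed above (names taken from the log, prefix k8s_)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done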
	I1223 00:05:52.759165  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:52.759178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:52.812520  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:52.812552  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:52.832551  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:52.832578  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:52.887680  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:52.887700  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:52.887719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:52.906246  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:52.906276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.444344  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:55.455763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:55.475305  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.475332  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:55.475389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:55.494094  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.494117  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:55.494164  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:55.511874  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.511896  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:55.511942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:55.530088  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.530113  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:55.530159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:55.548749  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.548778  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:55.548828  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:55.567179  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.567204  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:55.567269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:55.586315  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.586343  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:55.586395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:55.605282  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.605303  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:55.605314  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:55.605327  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:55.624085  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:55.624113  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.652038  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:55.652065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:55.699247  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:55.699274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:55.719031  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:55.719058  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:55.777078  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:58.278708  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:58.291024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:58.310944  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.310971  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:58.311027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:58.329419  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.329443  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:58.329499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:58.346556  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.346579  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:58.346653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:58.364565  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.364601  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:58.364653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:58.383020  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.383043  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:58.383089  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:58.401354  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.401381  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:58.401440  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:58.419356  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.419377  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:58.419426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:58.438428  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.438449  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:58.438461  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:58.438477  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:58.458325  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:58.458353  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:58.513127  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:58.513156  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:58.513173  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:58.532159  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:58.532183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:58.559409  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:58.559433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:01.105933  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:01.117378  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:01.136395  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.136418  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:01.136463  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:01.155037  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.155063  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:01.155111  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:01.173939  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.173960  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:01.174004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:01.193250  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.193271  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:01.193312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:01.210927  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.210948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:01.210990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:01.229293  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.229319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:01.229367  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:01.247971  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.247997  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:01.248059  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:01.267642  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.267667  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:01.267688  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:01.267718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:01.290552  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:01.290581  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:01.346096  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:01.346115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:01.346127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:01.364490  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:01.364516  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:01.391895  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:01.391918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:03.938979  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:03.950393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:03.969334  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.969364  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:03.969448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:03.988183  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.988205  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:03.988252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:04.007742  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.007767  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:04.007821  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:04.027502  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.027528  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:04.027582  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:04.048194  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.048222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:04.048286  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:04.067020  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.067044  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:04.067096  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:04.085747  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.085776  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:04.085829  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:04.103906  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.103936  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:04.103950  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:04.103963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:04.131404  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:04.131427  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:04.178862  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:04.178893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:04.198797  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:04.198823  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:04.255150  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:04.255174  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:04.255190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:06.777149  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:06.788444  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:06.807818  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.807839  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:06.807881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:06.827018  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.827044  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:06.827092  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:06.845320  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.845342  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:06.845395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:06.862837  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.862856  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:06.862907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:06.880629  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.880649  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:06.880690  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:06.898665  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.898694  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:06.898762  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:06.916571  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.916606  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:06.916662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:06.934190  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.934213  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:06.934228  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:06.934245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:06.961869  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:06.961895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:07.008426  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:07.008460  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:07.033602  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:07.033641  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:07.089432  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:07.089452  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:07.089463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.608089  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:09.619510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:09.638402  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.638426  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:09.638473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:09.657218  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.657247  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:09.657292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:09.675838  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.675871  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:09.675935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:09.694913  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.694939  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:09.694992  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:09.714024  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.714046  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:09.714097  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:09.733120  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.733142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:09.733188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:09.752081  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.752104  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:09.752148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:09.770630  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.770661  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:09.770676  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:09.770700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:09.818931  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:09.818967  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:09.839282  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:09.839309  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:09.895206  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:09.895234  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:09.895247  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.913965  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:09.913994  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.442178  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:12.453355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:12.472243  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.472267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:12.472312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:12.491113  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.491136  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:12.491192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:12.511291  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.511317  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:12.511376  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:12.532112  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.532141  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:12.532196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:12.551226  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.551250  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:12.551293  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:12.569426  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.569449  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:12.569504  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:12.588494  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.588520  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:12.588569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:12.606610  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.606644  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:12.606657  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:12.606674  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.634113  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:12.634143  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:12.681112  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:12.681140  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:12.700711  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:12.700736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:12.757239  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:12.757259  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:12.757273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.278124  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:15.290283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:15.309406  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.309433  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:15.309481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:15.328093  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.328119  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:15.328173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:15.346922  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.346949  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:15.347006  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:15.364932  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.364960  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:15.365013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:15.383120  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.383144  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:15.383188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:15.401332  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.401355  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:15.401404  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:15.419961  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.419986  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:15.420037  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:15.438746  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.438769  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:15.438780  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:15.438793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:15.486016  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:15.486044  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:15.506911  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:15.506939  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:15.566808  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:15.566826  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:15.566836  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.586013  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:15.586040  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.115753  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:18.127221  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:18.146018  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.146048  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:18.146094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:18.165274  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.165294  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:18.165337  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:18.183880  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.183904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:18.183947  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:18.202061  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.202082  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:18.202130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:18.219858  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.219892  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:18.219945  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:18.238966  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.238987  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:18.239032  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:18.260921  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.260949  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:18.260997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:18.280705  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.280735  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:18.280750  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:18.280764  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:18.299732  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:18.299756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.327603  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:18.327631  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:18.375722  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:18.375749  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:18.397572  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:18.397611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:18.454135  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
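Each log-gathering pass is preceded by a per-component container probe using kubeadm's k8s_<component> name prefix; every probe here comes back empty. The same loop, written out as a shell sketch (Docker runtime assumed, as in this run):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}')
      [ -n "$ids" ] || echo "no container matching ${c}"
    done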
	I1223 00:06:20.955833  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:20.967309  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:20.986237  687772 logs.go:282] 0 containers: []
	W1223 00:06:20.986258  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:20.986301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:21.004350  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.004377  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:21.004434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:21.022893  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.022919  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:21.022974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:21.042421  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.042441  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:21.042484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:21.061267  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.061293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:21.061355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:21.079988  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.080011  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:21.080064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:21.098196  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.098225  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:21.098279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:21.117158  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.117180  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:21.117191  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:21.117202  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:21.146189  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:21.146215  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:21.192645  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:21.192677  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:21.212689  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:21.212716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:21.269438  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:21.261783   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.262320   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.263990   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.264498   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.266022   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:21.261783   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.262320   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.263990   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.264498   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.266022   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
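The "container status" collector above uses command substitution to prefer crictl and fall back to the Docker CLI when crictl is absent or fails. Unrolled, the same logic reads (equivalent semantics, not minikube's literal code):

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a || sudo docker ps -a   # fall back if crictl itself errors
    else
      sudo docker ps -a                        # crictl not installed
    fi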
	I1223 00:06:21.269462  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:21.269480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:23.789716  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:23.801130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:23.820155  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.820180  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:23.820239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:23.838850  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.838875  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:23.838919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:23.856860  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.856881  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:23.856931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:23.874630  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.874653  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:23.874700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:23.893425  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.893454  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:23.893521  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:23.912712  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.912734  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:23.912789  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:23.931097  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.931124  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:23.931178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:23.949113  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.949138  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:23.949152  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:23.949168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:23.996109  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:23.996137  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:24.016228  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:24.016254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:24.071647  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:24.064286   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.064800   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066333   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066786   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.068314   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:24.064286   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.064800   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066333   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066786   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.068314   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
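The "describe nodes" step runs the version-matched kubectl binary that minikube ships inside the node, against the node-local kubeconfig. A hand reproduction from the host would look roughly like this (profile name taken from the Docker log further down; hypothetical invocation):

    minikube -p no-preload-063943 ssh \
      "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"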
	I1223 00:06:24.071665  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:24.071680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:24.090918  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:24.090944  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.624354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:26.635840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:26.654444  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.654473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:26.654537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:26.673364  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.673388  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:26.673436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:26.692467  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.692489  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:26.692539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:26.711627  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.711656  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:26.711709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:26.730302  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.730332  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:26.730386  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:26.748910  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.748939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:26.748995  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:26.768525  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.768548  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:26.768603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:26.788434  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.788462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:26.788476  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:26.788491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:26.845463  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:26.845482  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:26.845494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:26.864140  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:26.864167  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.890448  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:26.890476  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:26.937390  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:26.937422  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
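The dmesg pass keeps only warning-and-worse kernel messages and the last 400 lines. On a systemd node the journal offers a roughly equivalent pull (assumption: journald is capturing kmsg):

    sudo journalctl -k -p warning -n 400 --no-pager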
	I1223 00:06:29.457766  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:29.469205  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:29.488353  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.488376  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:29.488431  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:29.508035  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.508059  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:29.508114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:29.528210  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.528234  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:29.528280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:29.546344  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.546370  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:29.546432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:29.565125  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.565153  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:29.565200  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:29.584111  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.584142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:29.584195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:29.602714  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.602735  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:29.602778  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:29.621012  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.621042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:29.621058  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:29.621073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:29.669132  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:29.669168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.689406  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:29.689431  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:29.746681  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:29.746703  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:29.746720  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:29.765762  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:29.765793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:32.299443  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:32.310848  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:32.330298  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.330326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:32.330380  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:32.349664  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.349692  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:32.349745  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:32.367944  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.367969  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:32.368081  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:32.386919  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.386940  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:32.386983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:32.405416  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.405440  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:32.405487  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:32.423080  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.423100  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:32.423144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:32.441255  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.441282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:32.441336  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:32.459763  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.459789  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:32.459801  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:32.459812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:32.507284  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:32.507314  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:32.529983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:32.530014  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:32.587816  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:32.587843  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:32.587860  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:32.607796  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:32.607826  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.136489  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:35.147976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:35.166774  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.166794  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:35.166846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:35.185872  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.185899  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:35.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:35.204053  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.204074  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:35.204115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:35.223056  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.223077  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:35.223126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:35.241616  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.241645  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:35.241699  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:35.260422  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.260476  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:35.260536  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:35.279168  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.279192  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:35.279238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:35.297208  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.297236  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:35.297252  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:35.297267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:35.317273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:35.317299  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:35.374319  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:35.374337  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:35.374349  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:35.393025  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:35.393050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.420499  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:35.420537  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:37.968117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:37.979448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:37.998789  687772 logs.go:282] 0 containers: []
	W1223 00:06:37.998815  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:37.998861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:38.019815  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.019847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:38.019910  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:38.042524  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.042552  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:38.042617  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:38.061464  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.061489  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:38.061544  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:38.080482  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.080509  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:38.080558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:38.099189  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.099215  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:38.099279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:38.118161  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.118188  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:38.118244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:38.136752  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.136786  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:38.136803  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:38.136819  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:38.182751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:38.182779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:38.202352  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:38.202375  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:38.257901  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:38.257922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:38.257933  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:38.276963  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:38.276988  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:40.806792  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:40.818244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:40.837324  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.837348  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:40.837402  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:40.856364  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.856387  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:40.856453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:40.874753  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.874780  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:40.874831  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:40.893167  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.893193  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:40.893242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:40.910901  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.910924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:40.910976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:40.930108  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.930133  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:40.930191  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:40.949021  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.949047  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:40.949101  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:40.967221  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.967246  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:40.967260  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:40.967276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:40.988752  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:40.988779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:41.048349  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:41.048374  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:41.048387  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:41.067112  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:41.067138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:41.093421  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:41.093445  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:43.639363  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:43.653263  687772 out.go:203] 
	W1223 00:06:43.654345  687772 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1223 00:06:43.654374  687772 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1223 00:06:43.654383  687772 out.go:285] * Related issues:
	W1223 00:06:43.654397  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1223 00:06:43.654411  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1223 00:06:43.655505  687772 out.go:203] 
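The wait that exceeded its 6m0s budget is the pgrep poll repeated throughout the tail above. Reduced to a shell sketch with the same pattern and the same 6-minute budget (5s interval assumed; minikube's actual retry cadence may differ):

    for i in $(seq 1 72); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break   # found: stop waiting
      sleep 5
    done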
	
	
	==> Docker <==
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.023421578Z" level=info msg="Restoring containers: start."
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.041299594Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.059213099Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.611358548Z" level=info msg="Loading containers: done."
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620920645Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620957367Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620991634Z" level=info msg="Initializing buildkit"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.639509005Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645541635Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645622881Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645627452Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645628833Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:56:13 no-preload-063943 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:56:14 no-preload-063943 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:56:14 no-preload-063943 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
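Docker and cri-dockerd both come up cleanly in this window, so the runtime itself is not what blocked the apiserver; note also dockerd's cgroup v1 deprecation warning. A standard sysfs check shows which cgroup version a node runs:

    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" => v2, "tmpfs" => v1 (as on this node)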
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:11:19.183553   15785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:11:19.184179   15785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:11:19.185764   15785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:11:19.186227   15785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:11:19.187791   15785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
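	# --- editor's note, not part of the captured output ---
	# All five kubectl retries above fail at the same step: nothing answers on
	# the apiserver port, so API group discovery is refused before any query
	# runs. A minimal sketch of a manual check (hypothetical, outside the test
	# harness):
	#   out/minikube-linux-amd64 -p no-preload-063943 ssh -- sudo ss -tlnp | grep 8443 || echo "apiserver not listening"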
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[Dec22 23:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +0.088099] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 12 57 26 ed f1 08 06
	[  +5.341024] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[ +14.537406] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 72 df 3b 35 8d 08 06
	[  +0.000388] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +2.465032] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 84 3f 6a 28 22 08 06
	[  +0.000373] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[Dec23 00:00] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	[Dec23 00:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 20 71 68 66 a5 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	
	
	==> kernel <==
	 00:11:19 up  3:53,  0 user,  load average: 0.08, 0.60, 1.24
	Linux no-preload-063943 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 23 00:11:16 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:11:16 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 23 00:11:16 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:16 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:16 no-preload-063943 kubelet[15589]: E1223 00:11:16.784759   15589 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:11:16 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:11:16 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:11:17 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 23 00:11:17 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:17 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:17 no-preload-063943 kubelet[15600]: E1223 00:11:17.549035   15600 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:11:17 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:11:17 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:11:18 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 23 00:11:18 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:18 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:18 no-preload-063943 kubelet[15641]: E1223 00:11:18.309147   15641 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:11:18 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:11:18 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:11:18 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 23 00:11:18 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:18 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:11:19 no-preload-063943 kubelet[15704]: E1223 00:11:19.035060   15704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:11:19 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:11:19 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
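	# --- editor's note, not part of the captured journal ---
	# The restart counter (1203..1206) shows kubelet failing the same startup
	# validation on every attempt: this kubelet build refuses to run on a
	# cgroup v1 host. A hypothetical check of which cgroup version the node
	# exposes ("cgroup2fs" means v2, "tmpfs" means v1):
	#   out/minikube-linux-amd64 -p no-preload-063943 ssh -- stat -fc %T /sys/fs/cgroup/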
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 2 (299.297507ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.91s)

x
+
TestStartStop/group/newest-cni/serial/Pause (6.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-348344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (297.041946ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-348344 -n newest-cni-348344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (292.237438ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-348344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (292.22151ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-348344 -n newest-cni-348344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (292.506232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
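[Editor's note, not test output: a minimal manual reproduction of the assertions above, assuming the newest-cni-348344 profile still exists. After a successful pause the apiserver status should read "Paused", and after unpause both apiserver and kubelet should read "Running"; here every probe returned "Stopped".
	out/minikube-linux-amd64 pause -p newest-cni-348344 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
	out/minikube-linux-amd64 unpause -p newest-cni-348344 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-348344 -n newest-cni-348344]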
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-348344
helpers_test.go:244: (dbg) docker inspect newest-cni-348344:

-- stdout --
	[
	    {
	        "Id": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	        "Created": "2025-12-22T23:50:45.124975619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 687974,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-23T00:00:34.301956639Z",
	            "FinishedAt": "2025-12-23T00:00:32.890201351Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hostname",
	        "HostsPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hosts",
	        "LogPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b-json.log",
	        "Name": "/newest-cni-348344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-348344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-348344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	                "LowerDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-348344",
	                "Source": "/var/lib/docker/volumes/newest-cni-348344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-348344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-348344",
	                "name.minikube.sigs.k8s.io": "newest-cni-348344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1d6c9b4cbbb98d27f15b901c20b574a86c3cb628ad2da992c2e0c5437cff03b0",
	            "SandboxKey": "/var/run/docker/netns/1d6c9b4cbbb9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-348344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1020bfe2df349af00e9e2f4197eff27d709a25503c20a26c662019055cba21bb",
	                    "EndpointID": "66b6b308d2bcc6eca28baac06e33fe8d42bbea1f9fe8f1f5ee1a462ebfeba9bc",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:46:4e:43:ff:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-348344",
	                        "133dc19d84d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
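[Editor's note, not test output: the inspect dump above shows the container running with the apiserver's 8443/tcp published at 127.0.0.1:33171, so a hypothetical direct probe of the published endpoint would be:
	curl -k https://127.0.0.1:33171/healthz
A refused or hung connection there would be consistent with the "Stopped" apiserver status reported by the commands below.]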
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (292.13954ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25: (1.148737421s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-003676 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status docker --all --full --no-pager          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat docker --no-pager                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/docker/daemon.json                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo docker system info                                       │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat cri-docker --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cri-dockerd --version                                    │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status containerd --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat containerd --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /lib/systemd/system/containerd.service               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/containerd/config.toml                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo containerd config dump                                   │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status crio --all --full --no-pager            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │                     │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat crio --no-pager                            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo crio config                                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ delete  │ -p kubenet-003676                                                               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ image   │ newest-cni-348344 image list --format=json                                      │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ pause   │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ unpause │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/23 00:00:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1223 00:00:34.066824  687772 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:34.067051  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067058  687772 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:34.067063  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067257  687772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:34.067701  687772 out.go:368] Setting JSON to false
	I1223 00:00:34.068753  687772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13374,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:34.068805  687772 start.go:143] virtualization: kvm guest
	W1223 00:00:29.964565  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:31.965119  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:33.965297  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:34.070524  687772 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:34.072192  687772 notify.go:221] Checking for updates...
	I1223 00:00:34.072201  687772 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:34.073912  687772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:34.074996  687772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:34.076047  687772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:34.077175  687772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:34.078295  687772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:34.079882  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:34.080446  687772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:34.106101  687772 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:34.106213  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.161275  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.151129133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.161373  687772 docker.go:319] overlay module found
	I1223 00:00:34.163775  687772 out.go:179] * Using the docker driver based on existing profile
	I1223 00:00:34.164711  687772 start.go:309] selected driver: docker
	I1223 00:00:34.164723  687772 start.go:928] validating driver "docker" against &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.164829  687772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1223 00:00:34.165640  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.231050  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.221418272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.231362  687772 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1223 00:00:34.231388  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:34.231454  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:34.231489  687772 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.233188  687772 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1223 00:00:34.234320  687772 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:34.235503  687772 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:34.236442  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:34.236471  687772 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1223 00:00:34.236486  687772 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:34.236540  687772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:34.236575  687772 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1223 00:00:34.236586  687772 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1223 00:00:34.236749  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.256540  687772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:34.256557  687772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1223 00:00:34.256572  687772 cache.go:243] Successfully downloaded all kic artifacts
	I1223 00:00:34.256623  687772 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1223 00:00:34.256695  687772 start.go:364] duration metric: took 39.918µs to acquireMachinesLock for "newest-cni-348344"
	I1223 00:00:34.256714  687772 start.go:96] Skipping create...Using existing machine configuration
	I1223 00:00:34.256719  687772 fix.go:54] fixHost starting: 
	I1223 00:00:34.256918  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.273955  687772 fix.go:112] recreateIfNeeded on newest-cni-348344: state=Stopped err=<nil>
	W1223 00:00:34.273978  687772 fix.go:138] unexpected machine state, will restart: <nil>
	W1223 00:00:31.998314  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:34.497435  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:36.498048  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:34.276035  687772 out.go:252] * Restarting existing docker container for "newest-cni-348344" ...
	I1223 00:00:34.276100  687772 cli_runner.go:164] Run: docker start newest-cni-348344
	I1223 00:00:34.520995  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.540249  687772 kic.go:430] container "newest-cni-348344" state is running.
	I1223 00:00:34.540736  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:34.560319  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.560718  687772 machine.go:94] provisionDockerMachine start ...
	I1223 00:00:34.560825  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:34.581907  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:34.582194  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:34.582211  687772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1223 00:00:34.583095  687772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41172->127.0.0.1:33168: read: connection reset by peer
	I1223 00:00:37.726578  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.726621  687772 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1223 00:00:37.726764  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.746947  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.747183  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.747203  687772 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1223 00:00:37.900818  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.900900  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.919317  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.919561  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.919579  687772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1223 00:00:38.062239  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.062284  687772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1223 00:00:38.062331  687772 ubuntu.go:190] setting up certificates
	I1223 00:00:38.062344  687772 provision.go:84] configureAuth start
	I1223 00:00:38.062400  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:38.081263  687772 provision.go:143] copyHostCerts
	I1223 00:00:38.081355  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1223 00:00:38.081386  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1223 00:00:38.081497  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1223 00:00:38.081760  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1223 00:00:38.081785  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1223 00:00:38.081851  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1223 00:00:38.082007  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1223 00:00:38.082027  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1223 00:00:38.082101  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1223 00:00:38.082238  687772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
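	# --- editor's note, not part of the captured log ---
	# The server cert generated above embeds the SANs listed in san=[...]. A
	# hypothetical host-side verification after provisioning:
	#   openssl x509 -in /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'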
	I1223 00:00:38.170695  687772 provision.go:177] copyRemoteCerts
	I1223 00:00:38.170759  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1223 00:00:38.170815  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.189123  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.291920  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1223 00:00:38.309166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1223 00:00:38.326166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1223 00:00:38.344564  687772 provision.go:87] duration metric: took 282.199681ms to configureAuth
	I1223 00:00:38.344627  687772 ubuntu.go:206] setting minikube options for container-runtime
	I1223 00:00:38.344911  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:38.344995  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.366260  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.366529  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.366545  687772 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1223 00:00:38.510728  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1223 00:00:38.510754  687772 ubuntu.go:71] root file system type: overlay
	I1223 00:00:38.510908  687772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1223 00:00:38.510979  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.532018  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.532329  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.532458  687772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1223 00:00:38.686369  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
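The unit file above explains its own leading empty ExecStart= line: systemd rejects a second ExecStart= for any non-oneshot service, so a command inherited from a base configuration must be cleared before a replacement is declared. A minimal sketch of the same idiom as a drop-in override (hypothetical paths and daemon flags, not minikube's template):

    # Override only ExecStart, leaving the rest of docker.service intact.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
    [Service]
    # Clear the inherited command first; otherwise systemd refuses to start:
    # "Service has more than one ExecStart= setting ..."
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker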
	I1223 00:00:38.686443  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.704974  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.705187  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.705204  687772 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1223 00:00:38.853249  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
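The command above is a change-detection idiom: diff exits 0 and prints nothing when the freshly rendered docker.service.new matches the unit already on disk (the empty output that follows confirms exactly that), so the move/daemon-reload/restart branch after || runs only when the file really changed. The same logic written as an explicit if-block:

    # Replace and restart only when the rendered unit differs from disk.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi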
	I1223 00:00:38.853285  687772 machine.go:97] duration metric: took 4.292539002s to provisionDockerMachine
	I1223 00:00:38.853303  687772 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1223 00:00:38.853321  687772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1223 00:00:38.853419  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1223 00:00:38.853487  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.872421  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.979494  687772 ssh_runner.go:195] Run: cat /etc/os-release
	I1223 00:00:38.983309  687772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1223 00:00:38.983340  687772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1223 00:00:38.983352  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1223 00:00:38.983401  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1223 00:00:38.983478  687772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1223 00:00:38.983566  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1223 00:00:38.991328  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:39.009038  687772 start.go:296] duration metric: took 155.718049ms for postStartSetup
	I1223 00:00:39.009116  687772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1223 00:00:39.009156  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.028095  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	W1223 00:00:36.464751  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:38.465798  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:39.126878  687772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1223 00:00:39.131527  687772 fix.go:56] duration metric: took 4.874800463s for fixHost
	I1223 00:00:39.131555  687772 start.go:83] releasing machines lock for "newest-cni-348344", held for 4.874847678s
	I1223 00:00:39.131653  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:39.149572  687772 ssh_runner.go:195] Run: cat /version.json
	I1223 00:00:39.149644  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.149684  687772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1223 00:00:39.149791  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.169897  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.170262  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.329086  687772 ssh_runner.go:195] Run: systemctl --version
	I1223 00:00:39.336405  687772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1223 00:00:39.341291  687772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1223 00:00:39.341348  687772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1223 00:00:39.349279  687772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1223 00:00:39.349306  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.349351  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.349509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.363512  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1223 00:00:39.372815  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1223 00:00:39.381448  687772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.381508  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1223 00:00:39.390109  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.399162  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1223 00:00:39.408060  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.416584  687772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1223 00:00:39.424711  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1223 00:00:39.433432  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1223 00:00:39.442056  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1223 00:00:39.450905  687772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1223 00:00:39.458512  687772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1223 00:00:39.466991  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:39.550442  687772 ssh_runner.go:195] Run: sudo systemctl restart containerd
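The run of sed -i edits above rewrites /etc/containerd/config.toml key by key before restarting containerd: the sandbox (pause) image is pinned, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, and v1 runtime references are migrated to runc v2. The core substitutions, consolidated into one sketch (same expressions as in the log):

    CONFIG=/etc/containerd/config.toml
    # Pin the pause image used for pod sandboxes.
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CONFIG"
    # Match the host's cgroupfs driver (systemd cgroups off).
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CONFIG"
    # Migrate legacy v1 runtime references to runc v2.
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CONFIG"
    sudo systemctl daemon-reload && sudo systemctl restart containerd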
	I1223 00:00:39.632902  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.632954  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.633002  687772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1223 00:00:39.646974  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.659240  687772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1223 00:00:39.675979  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.688406  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1223 00:00:39.700912  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.715235  687772 ssh_runner.go:195] Run: which cri-dockerd
	I1223 00:00:39.718945  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1223 00:00:39.726972  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1223 00:00:39.739559  687772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1223 00:00:39.825824  687772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1223 00:00:39.909620  687772 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.909743  687772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1223 00:00:39.923453  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1223 00:00:39.935401  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:40.031388  687772 ssh_runner.go:195] Run: sudo systemctl restart docker
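The log records only that a 130-byte /etc/docker/daemon.json was written to select the cgroupfs driver; its exact contents are not shown. A plausible minimal file for that purpose (contents assumed, not taken from the log):

    # Hypothetical daemon.json selecting the cgroupfs cgroup driver;
    # the log shows only the file size, not the actual payload.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker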
	I1223 00:00:40.779801  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1223 00:00:40.794700  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1223 00:00:40.807727  687772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1223 00:00:40.821730  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:40.835038  687772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1223 00:00:40.917554  687772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1223 00:00:41.006720  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.090943  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1223 00:00:41.119047  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1223 00:00:41.131277  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.215437  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1223 00:00:41.283622  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:41.298485  687772 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1223 00:00:41.298551  687772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1223 00:00:41.302554  687772 start.go:564] Will wait 60s for crictl version
	I1223 00:00:41.302645  687772 ssh_runner.go:195] Run: which crictl
	I1223 00:00:41.306307  687772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1223 00:00:41.332425  687772 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1223 00:00:41.332495  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.358777  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.385668  687772 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1223 00:00:41.385749  687772 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1223 00:00:41.403696  687772 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1223 00:00:41.407961  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
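The /etc/hosts update above never truncates the live file: it rebuilds the contents in /tmp, dropping any stale host.minikube.internal mapping and appending the current one, then copies the result over /etc/hosts in a single step. The idiom in isolation:

    # Rebuild /etc/hosts off to the side, then swap it in as one write
    # from the reader's point of view.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts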
	I1223 00:00:41.419714  687772 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1223 00:00:38.997749  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:40.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:41.420647  687772 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1223 00:00:41.420848  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:41.420937  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.444049  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.444075  687772 docker.go:624] Images already preloaded, skipping extraction
	I1223 00:00:41.444128  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.466181  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.466206  687772 cache_images.go:86] Images are preloaded, skipping loading
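Both docker images listings above return the same eight images (order aside), so the preload tarball extraction is skipped. A sketch of that kind of check, testing a few of the required images (names taken from the stdout above) against the local cache:

    # Report any required image missing from the Docker image cache.
    for img in \
      registry.k8s.io/kube-apiserver:v1.35.0-rc.1 \
      registry.k8s.io/etcd:3.6.6-0 \
      registry.k8s.io/coredns/coredns:v1.13.1 \
      registry.k8s.io/pause:3.10.1; do
      docker images --format '{{.Repository}}:{{.Tag}}' | grep -qxF "$img" \
        || echo "missing: $img"
    done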
	I1223 00:00:41.466215  687772 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1223 00:00:41.466312  687772 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1223 00:00:41.466372  687772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1223 00:00:41.520065  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:41.520097  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:41.520116  687772 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1223 00:00:41.520151  687772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1223 00:00:41.520280  687772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1223 00:00:41.520348  687772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1223 00:00:41.529909  687772 binaries.go:51] Found k8s binaries, skipping transfer
	I1223 00:00:41.529987  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1223 00:00:41.538670  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1223 00:00:41.552566  687772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1223 00:00:41.565766  687772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1223 00:00:41.578604  687772 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1223 00:00:41.582388  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.592239  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.689716  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:41.716265  687772 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1223 00:00:41.716293  687772 certs.go:195] generating shared ca certs ...
	I1223 00:00:41.716315  687772 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:41.716492  687772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1223 00:00:41.716548  687772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1223 00:00:41.716557  687772 certs.go:257] generating profile certs ...
	I1223 00:00:41.716731  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1223 00:00:41.716814  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1223 00:00:41.716864  687772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1223 00:00:41.716992  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1223 00:00:41.717032  687772 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1223 00:00:41.717041  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1223 00:00:41.717076  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1223 00:00:41.717110  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1223 00:00:41.717142  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1223 00:00:41.717206  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:41.718210  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1223 00:00:41.739858  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1223 00:00:41.759304  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1223 00:00:41.777433  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1223 00:00:41.794878  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1223 00:00:41.811695  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1223 00:00:41.829786  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1223 00:00:41.846666  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1223 00:00:41.863546  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1223 00:00:41.880762  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1223 00:00:41.898275  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1223 00:00:41.915259  687772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1223 00:00:41.927541  687772 ssh_runner.go:195] Run: openssl version
	I1223 00:00:41.933577  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.940758  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1223 00:00:41.948346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952096  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952140  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.987400  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1223 00:00:41.995158  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.002657  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1223 00:00:42.010346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014050  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014097  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.048067  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1223 00:00:42.055784  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.063393  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1223 00:00:42.071081  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075446  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075515  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.110438  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
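The openssl x509 -hash calls above compute the subject-name hash under which OpenSSL looks up a CA certificate in /etc/ssl/certs (the b5213941.0 being tested is that hash name for minikubeCA.pem). Recreating such a hashed symlink by hand would look like:

    # Install a CA so OpenSSL finds it by subject hash, without
    # running update-ca-certificates.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")   # prints e.g. b5213941
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"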
	I1223 00:00:42.118365  687772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1223 00:00:42.122225  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1223 00:00:42.157294  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1223 00:00:42.192810  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1223 00:00:42.226497  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1223 00:00:42.261017  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1223 00:00:42.299910  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
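Each openssl x509 -checkend 86400 above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how soon-to-expire control-plane certs are caught before the existing cluster state is reused. The same check over several certs:

    # Flag any cert that will expire within 24 hours.
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 \
        || echo "expiring soon: $crt"
    done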
	I1223 00:00:42.335335  687772 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:42.335492  687772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1223 00:00:42.355576  687772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1223 00:00:42.364792  687772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1223 00:00:42.364813  687772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1223 00:00:42.364868  687772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1223 00:00:42.373141  687772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1223 00:00:42.374088  687772 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.374674  687772 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-348344" cluster setting kubeconfig missing "newest-cni-348344" context setting]
	I1223 00:00:42.375665  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.377667  687772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1223 00:00:42.385784  687772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1223 00:00:42.385817  687772 kubeadm.go:602] duration metric: took 20.99658ms to restartPrimaryControlPlane
	I1223 00:00:42.385829  687772 kubeadm.go:403] duration metric: took 50.508354ms to StartCluster
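The sudo diff -u of kubeadm.yaml against kubeadm.yaml.new above is the entire restart decision: identical files mean the running control plane already matches the desired configuration, which is why the log concludes no reconfiguration is required. The decision in isolation:

    # Reconfigure the control plane only if the rendered config drifted.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "running cluster does not require reconfiguration"
    else
      echo "config drift detected: control plane must be reconfigured"
    fi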
	I1223 00:00:42.385848  687772 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.385918  687772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.387577  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.387909  687772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:42.388038  687772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:42.388131  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:42.388136  687772 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-348344"
	I1223 00:00:42.388154  687772 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-348344"
	I1223 00:00:42.388170  687772 addons.go:70] Setting dashboard=true in profile "newest-cni-348344"
	I1223 00:00:42.388189  687772 addons.go:70] Setting default-storageclass=true in profile "newest-cni-348344"
	I1223 00:00:42.388208  687772 addons.go:239] Setting addon dashboard=true in "newest-cni-348344"
	W1223 00:00:42.388222  687772 addons.go:248] addon dashboard should already be in state true
	I1223 00:00:42.388226  687772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-348344"
	I1223 00:00:42.388262  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388184  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388611  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388764  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388771  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.390679  687772 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:42.391709  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:42.411960  687772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1223 00:00:42.411964  687772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:42.412088  687772 addons.go:239] Setting addon default-storageclass=true in "newest-cni-348344"
	I1223 00:00:42.412122  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.412456  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.413023  687772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.413043  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:42.413094  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.416720  687772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1223 00:00:42.418014  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1223 00:00:42.418038  687772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1223 00:00:42.418101  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.435878  687772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.435898  687772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:42.435951  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.437317  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.440138  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.468807  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.547260  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:42.600477  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1223 00:00:42.600501  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1223 00:00:42.601363  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.607651  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.614184  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1223 00:00:42.614204  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1223 00:00:42.628125  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1223 00:00:42.628154  687772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1223 00:00:42.643093  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1223 00:00:42.643121  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1223 00:00:42.702024  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1223 00:00:42.702052  687772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1223 00:00:42.716211  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1223 00:00:42.716238  687772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1223 00:00:42.729547  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1223 00:00:42.729569  687772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1223 00:00:42.742218  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1223 00:00:42.742246  687772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1223 00:00:42.754716  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:42.754741  687772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1223 00:00:42.767403  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.251127  687772 api_server.go:52] waiting for apiserver process to appear ...
	W1223 00:00:43.251190  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.251231  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251243  687772 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:43.251536  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.392258  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.445582  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.478824  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.509277  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:43.537617  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.565023  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.661224  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.715310  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.751478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.004029  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.062136  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:40.965708  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.465160  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.498161  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:45.997960  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
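The node_ready warnings interleaved here belong to a different profile (process 622784) testing no-preload-063943: that loop GETs the node object from https://192.168.103.2:8443 and inspects its Ready condition, retrying while the dial is refused because that cluster's apiserver is also down. A sketch of the underlying request, reduced to a plain HTTPS GET for illustration (the real poll uses an authenticated client-go clientset; the skipped certificate check is purely to keep the sketch self-contained):

package nodeprobe

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeNode issues the same GET the node_ready poll logs above and
// returns the HTTP status line. While the apiserver is down, the Get
// call itself fails with "connect: connection refused".
func probeNode(apiServer, node string) (string, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s/api/v1/nodes/%s", apiServer, node))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	return resp.Status, nil
}

Without client credentials this would return 401 or 403 once the apiserver is up; the refused dials in the log never get that far.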
	I1223 00:00:44.088452  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.143262  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.251335  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.383985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:44.396693  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.439768  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:44.455552  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.902917  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.962468  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.251805  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.326701  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:45.400433  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.424647  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:45.480956  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.751310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.799387  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:45.854148  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.251658  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.752208  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.836006  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:46.890186  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.995326  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:47.057423  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.251702  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:47.426474  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:47.485952  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.752131  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:48.207567  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:48.251315  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:48.262862  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.519836  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:48.578806  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.752112  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:45.465342  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.465739  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:50.498122  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:49.251876  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.751844  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.890160  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:49.947020  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.066047  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:50.120129  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.251336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:50.558241  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:50.615271  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.751572  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.252351  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.751913  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.251786  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.752327  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.791985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:52.850108  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:53.251446  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:53.751646  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:49.964544  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:51.964692  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:53.964842  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:52.997659  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:54.998340  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:54.252244  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.751444  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.347206  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:55.401615  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:55.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.936027  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:55.995088  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:56.251432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:56.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.111686  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:57.169648  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:57.251735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.751676  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.251279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.751377  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
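Between applies, process 687772 polls sudo pgrep -xnf kube-apiserver.*minikube.* at a steady ~500ms cadence, waiting for a kube-apiserver process to exist inside the node before any apply can possibly succeed. A minimal sketch of that poll loop, with a plain exec call standing in for minikube's ssh_runner (an assumption for illustration):

package procwait

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver
// process appears or the timeout elapses. pgrep exits 0 only when at
// least one process matches, so cmd.Run's error doubles as the signal.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}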
	W1223 00:00:55.965091  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:58.465307  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:59.465067  679852 pod_ready.go:94] pod "coredns-66bc5c9577-v4sr7" is "Ready"
	I1223 00:00:59.465093  679852 pod_ready.go:86] duration metric: took 31.505726579s for pod "coredns-66bc5c9577-v4sr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.467499  679852 pod_ready.go:83] waiting for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.471040  679852 pod_ready.go:94] pod "etcd-kubenet-003676" is "Ready"
	I1223 00:00:59.471063  679852 pod_ready.go:86] duration metric: took 3.544638ms for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.472907  679852 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.476385  679852 pod_ready.go:94] pod "kube-apiserver-kubenet-003676" is "Ready"
	I1223 00:00:59.476406  679852 pod_ready.go:86] duration metric: took 3.481083ms for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.478385  679852 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.663149  679852 pod_ready.go:94] pod "kube-controller-manager-kubenet-003676" is "Ready"
	I1223 00:00:59.663178  679852 pod_ready.go:86] duration metric: took 184.769862ms for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.863586  679852 pod_ready.go:83] waiting for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.263634  679852 pod_ready.go:94] pod "kube-proxy-4ftjm" is "Ready"
	I1223 00:01:00.263661  679852 pod_ready.go:86] duration metric: took 400.030267ms for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.464316  679852 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863672  679852 pod_ready.go:94] pod "kube-scheduler-kubenet-003676" is "Ready"
	I1223 00:01:00.863704  679852 pod_ready.go:86] duration metric: took 399.359894ms for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863716  679852 pod_ready.go:40] duration metric: took 32.907880274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1223 00:01:00.909769  679852 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1223 00:01:00.911549  679852 out.go:179] * Done! kubectl is now configured to use "kubenet-003676" cluster and "default" namespace by default
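The kubenet-003676 start above completes only after every control-plane pod reports Ready; each pod_ready.go line is one iteration of a poll over the pod's status conditions, and the coredns wait took 31.5s because its Ready condition only flipped to True at 00:00:59. A minimal client-go sketch of the predicate being polled (clientset construction from the kubeconfig is elided; the function name is illustrative):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady mirrors the check behind the pod_ready.go lines: fetch
// the pod and report whether its PodReady condition is True.
func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}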
	W1223 00:00:57.497653  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:59.497943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:59.251660  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.751533  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.252079  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.347519  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:00.405959  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.406017  687772 retry.go:84] will retry after 5.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.564176  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:00.620480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.751684  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.251318  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.252159  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.751813  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.251679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.752247  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:01.997413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:03.998103  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:06.498109  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:04.252308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.751451  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.252117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.751808  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.759328  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:05.824086  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:05.824133  687772 retry.go:84] will retry after 18.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.105622  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:06.162303  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.251478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:06.751336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.751827  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.251532  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.751779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:08.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:11.498214  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:09.251540  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.503324  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:09.562697  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.562742  687772 retry.go:84] will retry after 15.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.751986  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.251441  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.751356  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.251389  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.752279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.251802  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.751324  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.251818  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.751866  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:13.998099  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:16.497424  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:14.252139  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.751583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.252310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.751625  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.251842  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.252084  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.751783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.251376  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.751964  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:18.498157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:20.998023  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:19.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.751822  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.251704  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.568697  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:20.639449  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.639500  687772 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.751734  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.252249  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.751801  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.251412  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.751366  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.252197  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.751273  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:23.498226  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:25.998233  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:24.252133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.590346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:24.662633  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.662674  687772 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.751773  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:25.211650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:01:25.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:25.284581  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:25.752154  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.251728  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.751953  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.251838  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.751740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.251778  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.752182  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:28.498149  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:30.998068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:29.252321  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.752290  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.251523  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.251957  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.751675  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.251501  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.610650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:32.668272  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.668321  687772 retry.go:84] will retry after 25.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.751294  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.251566  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.751514  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:33.497906  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:35.997708  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:34.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.751347  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.251743  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.751789  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.251397  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.751367  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.251392  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.751873  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.751798  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:38.498282  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:40.998196  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:39.251316  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.752288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.251339  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.751372  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.752308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.752021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:42.776761  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.776791  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:42.776839  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:42.797079  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.797110  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:42.797168  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:42.817812  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.817839  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:42.817907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:42.837835  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.837868  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:42.837924  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:42.858165  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.858188  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:42.858231  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:42.878211  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.878236  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:42.878281  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:42.897739  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.897762  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:42.897817  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:42.918314  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.918340  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:42.918352  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:42.918362  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:42.965734  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:42.965766  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:42.986555  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:42.986585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:43.052069  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:43.052092  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:43.052108  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:43.072511  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:43.072542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
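
With no apiserver process ever appearing, the harness pivots to diagnostics: it looks for each control-plane container under the kubelet's k8s_<component> naming convention, finds none, and falls back to kubelet, dmesg, and Docker journals plus container status. One of those lookups as a standalone sketch (a docker CLI on the host is assumed here; in the log it runs inside the node over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Look up a control-plane container by the kubelet's k8s_<name>
    // prefix, as the diagnostics loop above does for each component.
    func containerIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil || strings.TrimSpace(string(out)) == "" {
            return nil // corresponds to the "0 containers" lines in the log
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids := containerIDs(c)
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Println(c, "=>", ids)
        }
    }
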
	W1223 00:01:43.497482  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:45.497538  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:45.620354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:45.631856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:45.651448  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.651473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:45.651528  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:45.671204  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.671229  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:45.671279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:45.690397  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.690418  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:45.690470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:45.711316  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.711342  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:45.711400  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:45.731731  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.731755  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:45.731812  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:45.751415  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.751442  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:45.751500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:45.770135  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.770157  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:45.770202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:45.789088  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.789114  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:45.789127  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:45.789139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:45.819714  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:45.819741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:45.867983  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:45.868013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:45.888018  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:45.888051  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:45.945247  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:45.945270  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:45.945286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:47.390289  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:47.445750  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:47.445788  687772 retry.go:84] will retry after 18.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:48.464795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:48.476515  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:48.499002  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.499028  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:48.499082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:48.523130  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.523165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:48.523225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:48.547882  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.547904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:48.547950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:48.567534  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.567560  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:48.567627  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:48.590197  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.590222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:48.590280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:48.609279  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.609306  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:48.609357  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:48.628700  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.628731  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:48.628795  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:48.648378  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.648402  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:48.648416  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:48.648433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:48.672700  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:48.672741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:48.743864  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:48.743885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:48.743897  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:48.762911  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:48.762941  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:48.793205  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:48.793235  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1223 00:01:47.498181  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:49.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:51.128346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:51.182480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.182522  687772 retry.go:84] will retry after 29.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
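The addon applier does not fail hard here: retry.go:84 records the error and schedules another attempt 29.5s out. A simplified sketch of that retry shape, assuming a fixed delay between attempts (minikube's own retry package computes its intervals; the fixed delay is an assumption for brevity):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryApply re-runs fn until it succeeds or attempts run out,
    // sleeping between tries, mirroring the "will retry after 29.5s"
    // behaviour logged above.
    func retryApply(fn func() error, attempts int, delay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
    	err := retryApply(func() error {
    		return errors.New("connection refused") // stand-in for the kubectl failure
    	}, 3, time.Second)
    	fmt.Println(err)
    }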
	I1223 00:01:51.341781  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:51.353222  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:51.373421  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.373447  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:51.373497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:51.392557  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.392613  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:51.392675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:51.411939  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.411964  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:51.412018  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:51.430968  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.430998  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:51.431054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:51.451234  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.451266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:51.451329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:51.470904  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.470947  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:51.471017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:51.490121  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.490146  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:51.490201  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:51.511843  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.511875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:51.511891  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:51.511906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:51.545106  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:51.545138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:51.594541  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:51.594576  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:51.615455  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:51.615485  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:51.680061  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:51.680080  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:51.680091  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1223 00:01:52.497841  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:54.498062  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:56.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:54.199432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:54.211004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:54.230469  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.230498  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:54.230548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:54.251212  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.251245  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:54.251300  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:54.274147  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.274177  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:54.274238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:54.297381  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.297413  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:54.297471  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:54.316290  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.316315  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:54.316362  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:54.335315  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.335339  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:54.335393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:54.354058  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.354089  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:54.354144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:54.372661  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.372686  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:54.372700  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:54.372715  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:54.417565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:54.417601  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:54.438013  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:54.438041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:54.497552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:54.497575  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:54.497589  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.517495  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:54.517523  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
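Each diagnostic cycle above is the same scan: look for a running kube-apiserver process, enumerate containers named k8s_<component>, then fall back to kubelet/journalctl, dmesg and kubectl describe. The container enumeration reduces to one docker command per component; a local sketch of it (minikube actually drives these commands through ssh_runner inside the node, and the component list here is abbreviated):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listK8sContainers mirrors the repeated
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    // calls above: it returns the IDs of matching containers, which is
    // empty ("0 containers") for every component while the control plane
    // is down.
    func listK8sContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listK8sContainers(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }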
	I1223 00:01:57.056037  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:57.067478  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:57.087084  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.087114  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:57.087183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:57.105193  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.105218  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:57.105270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:57.122859  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.122885  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:57.122931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:57.141996  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.142021  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:57.142074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:57.160005  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.160032  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:57.160083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:57.178889  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.178915  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:57.178989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:57.196419  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.196446  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:57.196498  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:57.214764  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.214790  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:57.214804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:57.214817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:57.266333  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:57.266370  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:57.289301  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:57.289330  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:57.347060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:57.347099  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:57.347117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:57.370222  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:57.370257  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:58.466074  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:58.519779  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:58.519828  687772 retry.go:84] will retry after 42.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:01:58.498177  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:00.998033  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:59.898063  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:59.909410  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:59.927950  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.927974  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:59.928017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:59.946773  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.946800  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:59.946861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:59.964419  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.964443  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:59.964500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:59.982454  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.982478  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:59.982537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:00.000838  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.000860  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:00.000929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:00.018673  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.018696  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:00.018747  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:00.035944  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.035973  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:00.036027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:00.053640  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.053666  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:00.053679  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:00.053700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:00.098844  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:00.098870  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:00.120198  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:00.120224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:00.175459  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:00.175477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:00.175490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:00.194106  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:00.194146  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:02.722361  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:02.733721  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:02.753939  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.753963  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:02.754025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:02.773570  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.773610  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:02.773665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:02.793427  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.793451  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:02.793514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:02.812154  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.812183  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:02.812241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:02.830757  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.830777  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:02.830819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:02.849140  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.849163  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:02.849206  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:02.867505  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.867529  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:02.867584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:02.885892  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.885920  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:02.885935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:02.885950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:02.933880  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:02.933906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:02.955273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:02.955304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:03.009924  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:03.009951  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:03.010012  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:03.028798  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:03.028821  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:03.497953  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:05.997506  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:05.557718  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:05.569769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:05.588873  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.588899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:05.588946  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:05.607265  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.607289  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:05.607342  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:05.625761  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.625790  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:05.625860  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:05.643443  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.643464  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:05.643513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:05.661241  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.661266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:05.661314  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:05.679744  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.679764  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:05.679805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:05.697808  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.697831  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:05.697878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:05.716222  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.716245  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:05.716255  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:05.716269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:05.772404  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:05.772437  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:05.793610  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:05.793643  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:05.849453  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:05.849479  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:05.849493  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:05.868250  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:05.868273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:05.997019  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:02:06.048845  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:06.048961  687772 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
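The validation step needs to download /openapi/v2 from the live server, so the apply fails before any object is even submitted. As the stderr suggests, --validate=false skips that round-trip; note it only removes the validation failure, since the objects would still be sent to a dead apiserver. A hypothetical wrapper showing the flag the message recommends (the function name is invented; the paths and kubeconfig mirror the log):

    package main

    import (
    	"os"
    	"os/exec"
    )

    // applyNoValidate shells out to kubectl with client-side validation
    // disabled, as the error text above recommends when the openapi
    // endpoint is unreachable.
    func applyNoValidate(kubeconfig string, manifests ...string) error {
    	args := []string{"apply", "--force", "--validate=false"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	_ = applyNoValidate("/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storageclass.yaml")
    }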
	I1223 00:02:08.398283  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:08.409794  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:08.428854  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.428878  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:08.428927  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:08.447223  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.447248  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:08.447292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:08.464797  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.464816  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:08.464857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:08.483364  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.483386  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:08.483449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:08.501985  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.502014  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:08.502064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:08.520995  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.521020  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:08.521071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:08.542089  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.542109  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:08.542149  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:08.560448  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.560468  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:08.560478  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:08.560489  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:08.606044  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:08.606073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:08.626314  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:08.626339  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:08.681455  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:08.681477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:08.681490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:08.699804  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:08.699830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:07.998312  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:10.498076  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:11.230050  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:11.241320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:11.260234  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.260256  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:11.260301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:11.279535  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.279558  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:11.279635  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:11.297775  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.297799  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:11.297843  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:11.315756  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.315780  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:11.315823  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:11.333828  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.333850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:11.333894  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:11.351608  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.351631  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:11.351673  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:11.371003  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.371024  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:11.371065  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:11.388642  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.388662  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
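The scan above is minikube enumerating every expected control-plane container by its k8s_ name prefix and finding none. The same loop can be reproduced by hand (a sketch assembled from the exact filters logged above):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	done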
	I1223 00:02:11.388671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:11.388681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:11.406664  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:11.406687  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.434828  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:11.434851  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:11.481940  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:11.481966  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:11.503051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:11.503077  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:11.563912  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:14.064094  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:02:12.997638  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:15.497547  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:15.997757  622784 node_ready.go:38] duration metric: took 6m0.000870759s for node "no-preload-063943" to be "Ready" ...
	I1223 00:02:15.999489  622784 out.go:203] 
	W1223 00:02:16.002745  622784 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1223 00:02:16.002767  622784 out.go:285] * 
	W1223 00:02:16.002971  622784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1223 00:02:16.004060  622784 out.go:203] 
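At this point the interleaved no-preload run (pid 622784) gives up: the node never reported Ready within the 6m0s budget, so minikube exits with GUEST_START and prints the issue-reporting box. To capture the diagnostics that box asks for, the command would be (profile name taken from the log lines above):

	minikube logs --file=logs.txt -p no-preload-063943

The lines that follow belong to the other run (pid 687772), which keeps cycling through the same log-gathering loop.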
	I1223 00:02:14.075388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:14.094051  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.094075  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:14.094123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:14.112428  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.112454  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:14.112511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:14.130910  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.130935  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:14.130991  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:14.149172  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.149194  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:14.149247  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:14.167387  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.167414  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:14.167470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:14.187009  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.187034  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:14.187080  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:14.205514  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.205537  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:14.205604  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:14.223867  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.223893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:14.223906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:14.223919  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:14.278850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:14.278877  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:14.278904  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:14.297791  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:14.297817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:14.329010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:14.329035  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:14.375196  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:14.375228  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
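With no Kubernetes containers to inspect, the only useful evidence is in the host journals and dmesg, which is why each cycle shells out to the same three commands. Run directly on the node, they are:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

The kubelet journal is the one most likely to say why kube-apiserver never started (for example, a failing static-pod manifest or an image that could not be pulled).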
	I1223 00:02:16.895760  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:16.908501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:16.928330  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.928357  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:16.928403  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:16.947248  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.947272  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:16.947319  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:16.967240  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.967266  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:16.967318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:16.986942  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.986966  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:16.987025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:17.008674  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.008702  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:17.008760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:17.030466  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.030492  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:17.030548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:17.051687  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.051719  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:17.051773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:17.073457  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.073486  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:17.073502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:17.073521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:17.131973  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:17.132010  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:17.157397  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:17.157433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:17.217639  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:17.217669  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:17.217683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:17.239498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:17.239530  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:19.769550  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:19.782360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:19.802423  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.802446  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:19.802497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:19.821183  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.821214  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:19.821269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:19.840343  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.840369  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:19.840426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:19.857810  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.857835  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:19.857878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:19.875458  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.875481  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:19.875523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:19.893840  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.893864  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:19.893916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:19.912030  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.912053  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:19.912094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:19.930049  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.930066  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:19.930077  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:19.930088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:19.976279  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:19.976304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:19.995814  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:19.995837  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:20.054797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:20.054819  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:20.054833  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:20.074562  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:20.074588  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:20.651032  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:02:20.702678  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:20.702795  687772 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
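The storage-provisioner addon fails for the same underlying reason: kubectl cannot download the OpenAPI schema from the dead apiserver. The command minikube retries is, verbatim from the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml

The --validate=false workaround suggested by the error message would only skip schema validation; the apply itself would still fail until something answers on localhost:8443.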
	I1223 00:02:22.602868  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:22.614420  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:22.633871  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.633892  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:22.633942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:22.652376  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.652403  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:22.652454  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:22.670318  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.670340  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:22.670384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:22.688893  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.688913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:22.688966  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:22.707579  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.707614  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:22.707667  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:22.726147  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.726174  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:22.726230  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:22.744895  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.744919  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:22.744975  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:22.765807  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.765834  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:22.765848  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:22.765858  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:22.786075  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:22.786111  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:22.814010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:22.814034  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:22.859717  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:22.859741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:22.878865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:22.878889  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:22.933790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
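Each retry cycle starts with the pgrep probe seen above. Its flags make it a strict liveness check: -f matches against the full argument list, -x requires the pattern to match that whole command line, and -n returns only the newest matching process:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

A non-zero exit (no matching process) means kube-apiserver is not running, which is consistent with the empty container scans that follow it in every cycle.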
	I1223 00:02:25.434500  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:25.446396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:25.466157  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.466184  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:25.466237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:25.484799  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.484827  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:25.484899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:25.503442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.503470  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:25.503516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:25.522088  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.522114  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:25.522174  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:25.540899  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.540924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:25.540979  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:25.559853  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.559877  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:25.559929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:25.578537  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.578560  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:25.578619  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:25.597442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.597465  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:25.597476  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:25.597491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:25.617688  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:25.617718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:25.672737  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:25.665784    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.666239    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.667786    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.668269    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.669751    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:25.665784    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.666239    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.667786    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.668269    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.669751    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:25.672761  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:25.672777  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:25.691559  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:25.691585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:25.719893  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:25.719918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.271777  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:28.284248  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:28.304042  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.304069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:28.304126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:28.322682  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.322711  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:28.322769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:28.340899  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.340925  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:28.340974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:28.359896  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.359922  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:28.359976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:28.378627  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.378650  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:28.378700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:28.396793  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.396821  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:28.396870  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:28.415408  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.415434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:28.415480  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:28.434108  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.434131  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:28.434142  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:28.434153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:28.462377  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:28.462405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.509046  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:28.509080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:28.531034  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:28.531065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:28.587866  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:28.580703    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.581207    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.582777    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.583174    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.584683    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:28.580703    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.581207    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.582777    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.583174    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.584683    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:28.587904  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:28.587920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.109730  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:31.121215  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:31.140775  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.140799  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:31.140853  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:31.160694  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.160719  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:31.160766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:31.180064  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.180087  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:31.180133  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:31.198777  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.198802  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:31.198856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:31.217848  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.217875  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:31.217923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:31.237167  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.237196  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:31.237251  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:31.257964  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.257995  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:31.258056  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:31.279556  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.279581  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:31.279607  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:31.279624  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:31.336644  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:31.336664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:31.336675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.355102  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:31.355129  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:31.384063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:31.384096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:31.429299  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:31.429337  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:33.951226  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:33.962558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:33.981280  687772 logs.go:282] 0 containers: []
	W1223 00:02:33.981301  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:33.981353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:34.000326  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.000351  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:34.000417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:34.020043  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.020069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:34.020114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:34.042279  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.042304  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:34.042363  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:34.060550  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.060571  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:34.060631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:34.078917  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.078939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:34.078986  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:34.098151  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.098177  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:34.098224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:34.117100  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.117124  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:34.117137  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:34.117153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:34.138330  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:34.138358  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:34.193562  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:34.193588  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:34.193615  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:34.212264  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:34.212288  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:34.240368  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:34.240399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:36.793206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:36.804783  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:36.823535  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.823556  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:36.823618  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:36.841856  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.841879  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:36.841933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:36.860292  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.860319  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:36.860360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:36.878691  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.878719  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:36.878773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:36.897448  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.897472  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:36.897519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:36.916562  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.916585  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:36.916654  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:36.934784  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.934807  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:36.934865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:36.953285  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.953305  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:36.953317  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:36.953328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:37.000978  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:37.001008  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:37.021185  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:37.021217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:37.081314  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:37.074072    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.074651    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076251    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076693    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.078295    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:37.081345  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:37.081366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:37.100453  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:37.100480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:39.629693  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:39.641060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:39.660163  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.660187  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:39.660232  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:39.680357  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.680379  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:39.680422  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:39.699821  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.699853  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:39.699916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:39.719383  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.719407  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:39.719460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:39.739699  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.739726  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:39.739800  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:39.758766  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.758791  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:39.758849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:39.777656  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.777690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:39.777752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:39.796962  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.796984  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:39.796995  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:39.797006  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:39.842320  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:39.842347  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:39.862054  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:39.862080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:39.916930  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:39.910012    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.910586    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912148    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912544    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.913971    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:39.916953  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:39.916970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:39.935277  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:39.935306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:40.946301  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:02:41.000005  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:41.000109  687772 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
	I1223 00:02:41.001884  687772 out.go:179] * Enabled addons: 
	I1223 00:02:41.002846  687772 addons.go:530] duration metric: took 1m58.614813363s for enable addons: enabled=[]
	I1223 00:02:42.463498  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:42.474861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:42.493733  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.493756  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:42.493806  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:42.513344  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.513376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:42.513436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:42.537617  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.537647  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:42.537701  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:42.557673  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.557698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:42.557746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:42.576567  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.576604  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:42.576669  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:42.595813  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.595836  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:42.595890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:42.615074  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.615101  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:42.615154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:42.634655  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.634685  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:42.634702  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:42.634719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:42.654826  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:42.654852  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:42.710552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:42.703396    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.703921    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705447    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705896    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.707365    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:42.710573  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:42.710585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:42.729412  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:42.729439  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:42.758163  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:42.758187  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.306682  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:45.318226  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:45.337265  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.337287  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:45.337343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:45.355924  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.355945  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:45.355990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:45.374282  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.374303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:45.374348  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:45.394500  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.394533  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:45.394584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:45.412466  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.412489  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:45.412538  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:45.431148  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.431185  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:45.431234  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:45.450281  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.450303  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:45.450352  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:45.468758  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.468787  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:45.468804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:45.468818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.520708  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:45.520742  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:45.542983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:45.543013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:45.598778  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:45.591672    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.592201    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.593887    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.594424    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.595931    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:45.598798  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:45.598812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:45.617903  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:45.617931  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.156370  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:48.167842  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:48.187202  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.187224  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:48.187268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:48.206448  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.206471  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:48.206516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:48.225302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.225322  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:48.225373  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:48.244155  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.244185  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:48.244245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:48.264312  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.264350  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:48.264418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:48.284233  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.284260  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:48.284317  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:48.303899  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.303924  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:48.303973  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:48.324302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.324335  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:48.324350  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:48.324366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:48.345435  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:48.345463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:48.402949  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:48.402972  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:48.402984  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:48.423927  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:48.423954  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.452771  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:48.452799  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.001239  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:51.013175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:51.032822  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.032846  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:51.032898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:51.051652  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.051682  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:51.051724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:51.070373  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.070395  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:51.070448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:51.088655  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.088676  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:51.088732  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:51.108004  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.108025  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:51.108078  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:51.126636  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.126662  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:51.126728  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:51.145355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.145385  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:51.145451  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:51.164355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.164384  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:51.164396  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:51.164409  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:51.191698  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:51.191724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.238383  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:51.238411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:51.260545  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:51.260580  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:51.318147  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:51.310651    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.311192    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.312772    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.313264    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.314877    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:51.318168  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:51.318182  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:53.838848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:53.850007  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:53.868584  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.868622  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:53.868663  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:53.887617  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.887640  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:53.887687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:53.906384  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.906409  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:53.906453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:53.924912  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.924938  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:53.924988  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:53.943400  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.943425  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:53.943477  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:53.961941  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.961969  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:53.962024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:53.980915  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.980941  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:53.980987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:53.998798  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.998817  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:53.998827  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:53.998839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:54.017064  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:54.017089  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:54.045091  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:54.045114  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:54.090278  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:54.090307  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:54.111890  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:54.111920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:54.166797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:54.159777    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.160366    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.161942    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.162343    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.163852    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.668571  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:56.680147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:56.699018  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.699042  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:56.699093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:56.716996  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.717019  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:56.717068  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:56.735529  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.735565  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:56.735644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:56.756677  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.756701  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:56.756757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:56.777819  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.777850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:56.777905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:56.799967  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.799997  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:56.800054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:56.818811  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.818836  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:56.818881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:56.837426  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.837461  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:56.837473  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:56.837487  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:56.893850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.893879  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:56.893894  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:56.912125  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:56.912151  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:56.939250  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:56.939279  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:56.986566  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:56.986599  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.506330  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:59.518294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:59.540502  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.540529  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:59.540586  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:59.559288  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.559322  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:59.559372  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:59.577919  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.577945  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:59.578002  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:59.596632  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.596655  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:59.596705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:59.614750  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.614775  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:59.614826  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:59.632989  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.633007  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:59.633057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:59.650953  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.650972  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:59.651020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:59.669171  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.669190  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:59.669202  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:59.669214  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:59.713997  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:59.714026  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.733682  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:59.733709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:59.801000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:59.801018  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:59.801029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:59.819988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:59.820018  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
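	(The timestamps, 00:02:56 -> 00:02:59 -> 00:03:02 and so on, show the same diagnostics being re-collected on a roughly 2.5-second poll for the apiserver process. A rough sketch of that loop shape, with the pgrep pattern copied from the log; the interval is inferred from the timestamps and apiserverRunning is a hypothetical name, not minikube's:)

```go
// Sketch of the visible retry shape: poll for the apiserver process and
// re-gather logs on each miss.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits 0 only when a process matches; -f matches the full command
	// line, -x requires an exact pattern match, -n keeps only the newest PID.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for !apiserverRunning() {
		fmt.Println("kube-apiserver not found; re-gathering kubelet/dmesg/docker logs ...")
		time.Sleep(2500 * time.Millisecond) // interval inferred from the log, not from source
	}
	fmt.Println("kube-apiserver is up")
}
```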
	I1223 00:03:02.350019  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:02.361484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:02.380765  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.380793  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:02.380841  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:02.398822  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.398847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:02.398892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:02.416468  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.416488  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:02.416530  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:02.435155  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.435182  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:02.435237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:02.453935  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.453961  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:02.454012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:02.472347  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.472376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:02.472445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:02.490480  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.490505  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:02.490562  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:02.510458  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.510485  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:02.510498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:02.510509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.541744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:02.541769  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:02.587578  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:02.587619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:02.607135  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:02.607161  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:02.663082  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:02.663104  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:02.663117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.182740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:05.194033  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:05.212783  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.212809  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:05.212868  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:05.230615  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.230643  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:05.230687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:05.249068  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.249091  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:05.249140  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:05.268884  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.268913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:05.268965  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:05.288077  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.288103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:05.288159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:05.306886  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.306916  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:05.306970  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:05.325552  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.325579  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:05.325644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:05.344222  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.344252  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:05.344264  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:05.344276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:05.389222  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:05.389252  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:05.409357  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:05.409384  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:05.466244  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:05.466269  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:05.466285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.484803  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:05.484830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
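	(The "container status" command above prefers crictl and falls back to docker through the shell's || chaining: `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`. A Go equivalent of that fallback, as an illustrative sketch only:)

```go
// Sketch of the crictl-then-docker fallback used for container status.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Prefer crictl when present; if it is missing or fails, fall back to
	// docker, mirroring the `|| sudo docker ps -a` branch in the log.
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			os.Stdout.Write(out)
			return
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("neither crictl nor docker could list containers:", err)
		return
	}
	os.Stdout.Write(out)
}
```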
	I1223 00:03:08.013719  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:08.026534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:08.046545  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.046567  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:08.046633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:08.065353  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.065375  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:08.065423  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:08.084081  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.084109  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:08.084156  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:08.102488  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.102514  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:08.102570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:08.121317  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.121347  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:08.121391  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:08.139209  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.139232  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:08.139282  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:08.157445  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.157465  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:08.157510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:08.177073  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.177101  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:08.177115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:08.177131  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:08.195188  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:08.195222  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.223256  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:08.223282  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:08.270668  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:08.270696  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:08.290331  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:08.290355  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:08.344801  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
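	(Each docker ps lookup in these cycles filters on the k8s_ name prefix that kubelet assigns to the containers it creates; an empty ID list even with -a means the control-plane containers were never created, not merely stopped. A minimal sketch of the same lookup follows; containerIDs is a hypothetical helper, while the command, filter, and format string are copied from the log:)

```go
// Sketch of the per-component container lookup that yields the
// "0 containers: []" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers named k8s_<component>_..., the
// naming scheme kubelet uses with the docker runtime.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty output -> empty slice
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, _ := containerIDs(c)
		fmt.Printf("%q: %d containers: %v\n", c, len(ids), ids)
	}
}
```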
	I1223 00:03:10.846497  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:10.857798  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:10.876797  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.876818  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:10.876863  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:10.895838  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.895862  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:10.895907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:10.913971  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.913996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:10.914038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:10.932422  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.932449  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:10.932501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:10.951013  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.951034  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:10.951076  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:10.969170  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.969198  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:10.969242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:10.988274  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.988332  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:10.988382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:11.006849  687772 logs.go:282] 0 containers: []
	W1223 00:03:11.006875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:11.006889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:11.006906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:11.059569  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:11.059619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:11.079808  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:11.079835  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:11.134768  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:11.134794  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:11.134817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:11.153181  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:11.153207  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.681510  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:13.692957  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:13.711987  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.712017  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:13.712069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:13.730999  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.731026  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:13.731083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:13.753677  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.753709  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:13.753769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:13.779299  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.779328  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:13.779389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:13.800195  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.800223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:13.800269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:13.818836  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.818861  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:13.818905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:13.837265  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.837293  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:13.837349  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:13.855911  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.855934  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:13.855944  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:13.855963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:13.877413  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:13.877442  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:13.932902  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:13.932922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:13.932935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:13.951430  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:13.951455  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.979434  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:13.979463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.528395  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:16.539658  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:16.558721  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.558746  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:16.558802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:16.577097  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.577122  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:16.577169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:16.594944  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.594973  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:16.595021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:16.612956  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.612982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:16.613028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:16.631601  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.631626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:16.631689  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:16.650054  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.650077  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:16.650125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:16.668847  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.668868  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:16.668912  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:16.686862  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.686892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:16.686906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:16.686923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:16.743145  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
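	(The five memcache.go errors per describe attempt are client-go's API discovery retries: kubectl first probes /api for the server's group list, and each probe dies at the dial before the final "connection to the server ... was refused" summary. A hedged reproduction of just that dial-level failure; the URL is copied from the log, and the real kubectl authenticates with the kubeconfig's client certificates rather than skipping verification:)

```go
// Illustrative repeat of the discovery probe kubectl retries above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only because the failure happens at the
		// TCP dial anyway; this is not how kubectl configures its client.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 1; i <= 5; i++ {
		resp, err := client.Get("https://localhost:8443/api?timeout=32s")
		if err == nil {
			resp.Body.Close()
		}
		fmt.Printf("attempt %d: %v\n", i, err) // expect "connection refused"
	}
}
```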
	I1223 00:03:16.743166  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:16.743178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:16.762565  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:16.762607  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:16.794528  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:16.794556  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.840343  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:16.840372  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.362509  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:19.374211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:19.393192  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.393216  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:19.393268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:19.412437  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.412465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:19.412523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:19.432373  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.432401  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:19.432460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:19.452125  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.452159  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:19.452217  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:19.471301  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.471328  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:19.471374  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:19.490544  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.490571  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:19.490643  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:19.510487  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.510508  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:19.510559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:19.529060  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.529084  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:19.529097  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:19.529112  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:19.574443  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:19.574473  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.594488  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:19.594517  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:19.649890  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:19.649910  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:19.649923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:19.668626  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:19.668651  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:22.198480  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:22.210881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:22.230438  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.230462  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:22.230522  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:22.248861  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.248882  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:22.248922  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:22.268466  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.268499  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:22.268557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:22.289199  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.289223  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:22.289268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:22.307380  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.307405  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:22.307470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:22.324678  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.324704  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:22.324763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:22.343704  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.343736  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:22.343791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:22.362087  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.362117  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:22.362137  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:22.362150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:22.409818  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:22.409877  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:22.430134  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:22.430165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:22.485643  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:22.485664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:22.485680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:22.504121  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:22.504150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:25.031881  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:25.043513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:25.063145  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.063167  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:25.063211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:25.082000  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.082025  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:25.082074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:25.099962  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.099984  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:25.100038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:25.118454  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.118479  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:25.118537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:25.136993  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.137020  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:25.137069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:25.155902  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.155925  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:25.155974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:25.175659  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.175683  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:25.175737  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:25.194139  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.194167  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:25.194180  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:25.194193  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:25.240226  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:25.240258  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:25.261339  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:25.261367  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:25.320736  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
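This describe-nodes failure follows directly from the empty probes before it: no k8s_kube-apiserver container exists, so nothing is listening on the apiserver port and kubectl's dial of https://localhost:8443 is refused. Every later pass in this section fails the same way; only the timestamps and kubectl PIDs change. The reachability question reduces to a TCP dial, as in this illustrative sketch (not minikube's code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same endpoint kubectl is dialing in the stderr above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}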
	I1223 00:03:25.320756  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:25.320768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:25.341035  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:25.341064  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:27.870845  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:27.882071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:27.901298  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.901323  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:27.901382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:27.919859  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.919880  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:27.919930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:27.938496  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.938520  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:27.938563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:27.956888  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.956916  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:27.956972  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:27.975342  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.975362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:27.975412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:27.994015  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.994038  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:27.994082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:28.013037  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.013065  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:28.013125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:28.033210  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.033234  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:28.033247  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:28.033262  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:28.078861  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:28.078892  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:28.098865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:28.098890  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:28.154165  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:28.154185  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:28.154197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:28.172425  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:28.172454  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.702937  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:30.714537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:30.735323  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.735346  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:30.735411  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:30.754342  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.754364  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:30.754416  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:30.773486  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.773513  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:30.773570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:30.792473  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.792498  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:30.792554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:30.810955  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.810981  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:30.811028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:30.829795  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.829816  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:30.829864  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:30.848939  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.848959  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:30.849000  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:30.867397  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.867423  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:30.867435  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:30.867452  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:30.887088  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:30.887116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:30.942084  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:30.942116  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:30.942130  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:30.960703  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:30.960730  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.988334  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:30.988359  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.539710  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:33.551147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:33.569876  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.569899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:33.569943  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:33.588678  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.588710  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:33.588766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:33.607229  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.607251  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:33.607302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:33.625442  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.625466  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:33.625527  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:33.644308  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.644340  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:33.644396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:33.662684  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.662717  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:33.662786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:33.681135  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.681161  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:33.681209  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:33.700016  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.700042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:33.700057  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:33.700070  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:33.718957  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:33.718985  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:33.747390  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:33.747417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.793693  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:33.793722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:33.815051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:33.815076  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:33.869709  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:36.371365  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
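Each cycle opens with the pgrep check above, a process-level complement to the container probes: -f matches against the full command line, -x requires that match to be exact, and -n keeps only the newest hit, so any output means a kube-apiserver process is genuinely running whatever the container state. An illustrative Go wrapper (not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID mirrors the log's check: newest process whose full
// command line exactly matches the kube-apiserver pattern.
func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Output()
	if err != nil { // pgrep exits 1 when nothing matches
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if pid, ok := apiserverPID(); ok {
		fmt.Println("kube-apiserver running, pid", pid)
	} else {
		fmt.Println("no kube-apiserver process")
	}
}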
	I1223 00:03:36.383229  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:36.403744  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.403771  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:36.403818  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:36.422087  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.422109  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:36.422163  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:36.440967  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.440989  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:36.441046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:36.459110  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.459137  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:36.459184  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:36.477754  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.477781  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:36.477838  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:36.496775  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.496803  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:36.496857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:36.516542  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.516577  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:36.516652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:36.537692  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.537720  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:36.537731  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:36.537744  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:36.585346  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:36.585376  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:36.605519  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:36.605545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:36.660230  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:36.660253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:36.660269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:36.678368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:36.678395  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:39.206672  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:39.218123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:39.236299  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.236322  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:39.236384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:39.256168  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.256194  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:39.256256  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:39.278907  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.278934  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:39.278987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:39.299685  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.299712  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:39.299771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:39.319824  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.319847  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:39.319890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:39.339314  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.339340  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:39.339388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:39.357097  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.357122  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:39.357178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:39.375484  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.375506  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:39.375518  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:39.375528  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:39.422143  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:39.422171  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:39.442163  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:39.442190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:39.499251  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:39.499300  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:39.499313  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:39.520555  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:39.520585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:42.050334  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:42.062329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:42.081392  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.081414  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:42.081466  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:42.100032  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.100060  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:42.100108  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:42.118667  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.118701  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:42.118755  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:42.137260  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.137280  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:42.137324  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:42.156202  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.156223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:42.156268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:42.173781  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.173805  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:42.173849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:42.191802  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.191823  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:42.191865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:42.210403  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.210428  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:42.210439  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:42.210451  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:42.257288  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:42.257324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:42.279921  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:42.279950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:42.335965  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:42.335989  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:42.336007  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:42.354691  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:42.354717  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:44.883238  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:44.894443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:44.913117  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.913141  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:44.913198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:44.931401  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.931426  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:44.931481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:44.950195  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.950223  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:44.950276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:44.968485  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.968511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:44.968566  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:44.987148  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.987171  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:44.987233  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:45.005624  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.005646  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:45.005693  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:45.023699  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.023724  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:45.023791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:45.042874  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.042892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:45.042903  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:45.042913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:45.091063  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:45.091090  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:45.111078  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:45.111104  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:45.165637  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:45.165664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:45.165680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:45.183805  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:45.183831  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.712691  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:47.724393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:47.743118  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.743145  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:47.743192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:47.764020  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.764047  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:47.764100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:47.784950  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.784979  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:47.785031  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:47.805130  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.805153  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:47.805202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:47.824818  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.824840  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:47.824881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:47.842122  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.842142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:47.842182  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:47.860107  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.860126  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:47.860169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:47.877957  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.877981  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:47.877991  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:47.878003  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.913554  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:47.913583  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:47.959272  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:47.959301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:47.979197  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:47.979224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:48.034846  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:48.034864  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:48.034876  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:50.554653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:50.565766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:50.584506  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.584527  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:50.584568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:50.603087  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.603112  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:50.603159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:50.621694  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.621718  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:50.621758  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:50.640855  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.640882  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:50.640950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:50.658573  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.658615  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:50.658659  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:50.676703  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.676725  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:50.676792  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:50.694997  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.695020  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:50.695084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:50.711361  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.711382  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:50.711393  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:50.711405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:50.739475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:50.739500  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:50.789788  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:50.789828  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:50.810067  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:50.810096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:50.864855  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:50.857771   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.858239   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.859923   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.860349   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.861907   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
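The block above is minikube's per-component probe: for each control-plane piece it lists Docker containers whose names carry the kubelet's k8s_<component> prefix, and every listing comes back empty ("0 containers"), so the control plane never started. A minimal standalone sketch of the same probe, assuming shell access to the node (e.g. via `minikube ssh`); the loop body is exactly the command the log runs:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  # Empty output here corresponds to the "0 containers: []" lines above.
	  ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	  echo "${c}: ${ids:-<none>}"
	done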
	I1223 00:03:50.864881  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:50.864896  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.383457  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:53.394757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:53.414248  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.414277  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:53.414341  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:53.432950  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.432970  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:53.433020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:53.452058  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.452081  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:53.452143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:53.470670  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.470698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:53.470751  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:53.489416  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.489443  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:53.489486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:53.508963  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.508995  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:53.509057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:53.530683  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.530710  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:53.530770  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:53.551545  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.551577  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:53.551610  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:53.551627  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.570296  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:53.570324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:53.598123  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:53.598154  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:53.646248  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:53.646280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:53.666819  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:53.666844  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:53.722068  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:53.715109   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.715646   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717149   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717536   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.719006   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
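Each describe-nodes attempt fails identically: kubectl dials https://localhost:8443 and gets connection refused, which points at the missing kube-apiserver container rather than a kubeconfig or credentials problem. A quick way to confirm from inside the node; the first command is taken verbatim from the log, while the /healthz probe is a generic apiserver check added here as an assumption, not something this test runs:

	# Exactly the command the log runs (exits 1 while the apiserver is down):
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Generic follow-up probe (assumption): is anything listening on 8443?
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"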
	I1223 00:03:56.223706  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:56.235187  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:56.255491  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.255511  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:56.255551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:56.274455  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.274479  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:56.274519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:56.293621  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.293648  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:56.293702  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:56.312485  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.312511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:56.312558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:56.331239  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.331266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:56.331320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:56.349793  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.349813  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:56.349856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:56.368378  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.368397  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:56.368446  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:56.386706  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.386730  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:56.386744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:56.386759  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:56.435036  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:56.435067  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:56.456766  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:56.456793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:56.515022  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:56.506534   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.507203   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.508885   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.509323   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.510920   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:56.515044  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:56.515056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:56.537382  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:56.537424  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.067413  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:59.078926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:59.098458  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.098490  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:59.098543  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:59.119074  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.119100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:59.119146  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:59.138014  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.138036  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:59.138082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:59.157367  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.157390  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:59.157433  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:59.175923  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.175950  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:59.176008  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:59.194211  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.194243  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:59.194295  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:59.212980  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.213004  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:59.213050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:59.231233  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.231255  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:59.231266  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:59.231277  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.260354  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:59.260377  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:59.307751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:59.307784  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:59.327756  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:59.327782  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:59.382873  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:59.375811   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.376331   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.377895   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.378317   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.379902   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:59.382895  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:59.382908  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:01.903304  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:01.914514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:01.933300  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.933328  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:01.933388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:01.952153  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.952181  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:01.952225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:01.970903  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.970933  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:01.970987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:01.989493  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.989513  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:01.989567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:02.009114  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.009141  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:02.009198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:02.030277  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.030310  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:02.030365  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:02.050466  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.050492  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:02.050551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:02.069917  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.069941  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:02.069956  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:02.069970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:02.115721  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:02.115750  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:02.135348  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:02.135373  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:02.190691  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:02.183688   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.184205   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.185799   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.186209   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.187682   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:02.190712  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:02.190724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:02.209097  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:02.209122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:04.737357  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:04.748553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:04.770341  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.770369  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:04.770424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:04.791137  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.791165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:04.791214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:04.810520  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.810541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:04.810607  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:04.828972  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.829000  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:04.829055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:04.849074  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.849096  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:04.849148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:04.868041  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.868063  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:04.868115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:04.886481  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.886504  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:04.886567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:04.905235  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.905262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:04.905274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:04.905285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:04.953851  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:04.953880  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:04.973781  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:04.973806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:05.031345  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:05.031368  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:05.031383  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:05.050812  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:05.050839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:07.580204  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:07.592091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:07.611238  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.611267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:07.611318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:07.630713  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.630736  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:07.630786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:07.649511  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.649541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:07.649620  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:07.668236  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.668264  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:07.668323  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:07.687077  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.687101  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:07.687158  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:07.705952  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.705982  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:07.706036  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:07.725156  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.725178  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:07.725224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:07.744024  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.744049  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:07.744063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:07.744079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:07.797680  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:07.797721  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:07.819453  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:07.819481  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:07.875026  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:07.875046  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:07.875059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:07.893942  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:07.893968  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:10.422234  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:10.433749  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:10.453027  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.453049  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:10.453099  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:10.471766  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.471789  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:10.471840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:10.489960  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.489981  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:10.490025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:10.508537  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.508558  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:10.508614  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:10.527336  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.527362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:10.527418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:10.545995  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.546019  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:10.546074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:10.564167  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.564196  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:10.564254  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:10.582919  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.582947  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:10.582961  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:10.582974  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:10.630969  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:10.631004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:10.651161  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:10.651197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:10.709000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:10.709026  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:10.709041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:10.728175  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:10.728203  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:13.258812  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:13.271437  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:13.293437  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.293468  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:13.293525  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:13.313483  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.313508  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:13.313568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:13.333612  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.333643  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:13.333709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:13.353086  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.353111  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:13.353169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:13.372208  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.372230  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:13.372275  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:13.391431  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.391457  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:13.391507  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:13.410402  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.410434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:13.410502  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:13.428653  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.428675  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:13.428687  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:13.428709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:13.474690  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:13.474729  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:13.495426  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:13.495457  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:13.550790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:13.550810  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:13.550822  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:13.569370  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:13.569397  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.099133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:16.110484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:16.129712  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.129743  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:16.129808  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:16.147785  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.147808  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:16.147854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:16.167259  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.167284  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:16.167333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:16.186151  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.186178  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:16.186223  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:16.206074  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.206099  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:16.206154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:16.225296  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.225319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:16.225369  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:16.244091  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.244115  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:16.244160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:16.263620  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.263643  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:16.263655  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:16.263667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:16.323241  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:16.323265  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:16.323281  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:16.342320  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:16.342346  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.371156  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:16.371183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:16.421158  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:16.421188  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:18.942795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:18.954257  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:18.974190  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.974217  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:18.974270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:18.993178  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.993200  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:18.993245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:19.013377  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.013405  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:19.013465  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:19.034917  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.034941  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:19.034990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:19.054247  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.054271  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:19.054326  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:19.072206  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.072235  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:19.072297  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:19.091855  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.091882  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:19.091933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:19.111067  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.111100  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
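Each probe cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.* (-f matches against the full command line, -x requires the whole line to match, -n returns only the newest match) and then asks Docker for every container the kubelet would have created; those always carry the k8s_ name prefix. All eight filters coming back empty means none of the control-plane pods materialized. An illustrative, self-contained version of the same probe loop (a sketch, not minikube's logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The same components the log probes, using the kubelet's k8s_ naming scheme.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }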
	I1223 00:04:19.111114  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:19.111127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:19.161923  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:19.161955  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:19.182679  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:19.182708  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:19.239475  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
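Note that the describe-nodes step shells out to the version-matched kubectl that minikube installs inside the node (/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl) and points it at the node's own kubeconfig (/var/lib/minikube/kubeconfig), so the connection refused here is observed from the node itself, not from the test host. The five near-identical memcache.go errors per attempt appear to be the client's discovery layer retrying the API group list before kubectl finally gives up.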
	I1223 00:04:19.239503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:19.239521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:19.259046  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:19.259075  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:21.799246  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:21.810742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:21.830826  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.830852  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:21.830896  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:21.849427  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.849455  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:21.849501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:21.867823  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.867847  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:21.867891  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:21.886431  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.886452  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:21.886508  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:21.905079  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.905103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:21.905160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:21.923344  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.923365  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:21.923407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:21.941945  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.941966  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:21.942012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:21.959749  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.959773  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:21.959785  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:21.959795  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:21.979750  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:21.979776  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:22.008278  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:22.008301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:22.059988  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:22.060022  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:22.080174  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:22.080201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:22.135625  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
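The pgrep timestamps show the whole diagnostic pass repeating roughly every 2.8 seconds. A sketch of that apparent wait loop (the interval is inferred from the timestamps and the deadline is an assumption; neither is taken from minikube's source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed budget
    	for time.Now().Before(deadline) {
    		// sudo dropped here; the in-node command runs as root.
    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		// ...gather kubelet/dmesg/describe-nodes/Docker/container-status here...
    		time.Sleep(2500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for kube-apiserver")
    }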
	I1223 00:04:24.636526  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:24.647769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:24.666800  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.666823  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:24.666873  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:24.685078  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.685100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:24.685153  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:24.703219  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.703238  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:24.703287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:24.721619  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.721647  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:24.721705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:24.740548  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.740570  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:24.740632  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:24.758544  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.758568  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:24.758633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:24.776285  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.776317  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:24.776445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:24.794360  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.794386  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:24.794399  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:24.794413  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:24.840111  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:24.840142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:24.860260  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:24.860286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:24.915702  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:24.915723  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:24.915736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:24.934368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:24.934394  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.463653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:27.474997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:27.494098  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.494127  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:27.494183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:27.513771  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.513799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:27.513855  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:27.534688  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.534720  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:27.534777  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:27.553043  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.553065  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:27.553115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:27.571979  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.572005  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:27.572049  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:27.590357  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.590376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:27.590419  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:27.609465  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.609490  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:27.609547  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:27.628214  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.628238  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:27.628253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:27.628267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:27.646519  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:27.646545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.674935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:27.674958  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:27.721277  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:27.721306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:27.741140  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:27.741165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:27.796676  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
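One detail visible across cycles: the "Gathering logs for ..." steps run in a different order each pass (kubelet first in one, Docker or container status first in another), which is the signature of ranging over a Go map, whose iteration order is deliberately randomized. A quick demonstration:

    package main

    import "fmt"

    func main() {
    	// Range order over a map is randomized on every iteration, which would
    	// explain the shuffling gather order between the cycles above.
    	sources := map[string]string{
    		"kubelet":          "journalctl -u kubelet -n 400",
    		"dmesg":            "dmesg --level warn,err,crit,alert,emerg",
    		"describe nodes":   "kubectl describe nodes",
    		"Docker":           "journalctl -u docker -u cri-docker -n 400",
    		"container status": "crictl ps -a",
    	}
    	for pass := 1; pass <= 2; pass++ {
    		for name := range sources {
    			fmt.Printf("pass %d: Gathering logs for %s ...\n", pass, name)
    		}
    	}
    }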
	I1223 00:04:30.297779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:30.308987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:30.327806  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.327827  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:30.327885  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:30.347142  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.347165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:30.347216  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:30.365629  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.365656  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:30.365729  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:30.383470  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.383496  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:30.383552  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:30.402127  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.402152  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:30.402214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:30.420681  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.420706  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:30.420757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:30.439453  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.439475  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:30.439517  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:30.458669  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.458691  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:30.458702  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:30.458713  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:30.505022  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:30.505050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:30.528295  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:30.528323  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:30.585055  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:30.585076  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:30.585088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:30.604200  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:30.604229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:33.131779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:33.143670  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:33.163179  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.163200  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:33.163245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:33.182970  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.182992  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:33.183043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:33.201569  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.201609  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:33.201656  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:33.219907  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.219931  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:33.219989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:33.239604  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.239630  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:33.239675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:33.258182  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.258211  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:33.258263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:33.277606  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.277632  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:33.277678  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:33.297258  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.297283  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:33.297296  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:33.297312  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:33.344903  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:33.344932  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:33.364742  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:33.364768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:33.420528  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
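Because every probe uses docker ps -a, exited containers would still be listed; an empty result for all eight names therefore suggests the control-plane containers were never created at all rather than crash-looping, which points the investigation at the kubelet journal gathered above rather than at any single component.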
	I1223 00:04:33.420549  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:33.420560  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:33.439384  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:33.439411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:35.968903  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:35.980276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:35.999444  687772 logs.go:282] 0 containers: []
	W1223 00:04:35.999474  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:35.999534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:36.018792  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.018819  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:36.018880  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:36.036956  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.036985  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:36.037043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:36.055239  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.055265  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:36.055315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:36.073241  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.073272  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:36.073325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:36.091575  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.091613  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:36.091662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:36.110369  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.110396  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:36.110448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:36.128481  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.128505  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:36.128516  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:36.128526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:36.176492  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:36.176526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:36.196649  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:36.196675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:36.253201  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:36.253224  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:36.253241  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:36.273351  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:36.273379  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.804411  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:38.815899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:38.834644  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.834668  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:38.834713  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:38.853892  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.853919  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:38.853967  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:38.871484  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.871505  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:38.871554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:38.889803  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.889828  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:38.889879  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:38.909558  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.909586  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:38.909652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:38.929528  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.929553  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:38.929624  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:38.948153  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.948181  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:38.948241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:38.966657  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.966679  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:38.966689  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:38.966711  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.994610  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:38.994637  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:39.040694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:39.040722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:39.060391  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:39.060417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:39.116169  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:39.116189  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:39.116201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.638009  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:41.650427  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:41.670214  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.670241  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:41.670289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:41.689539  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.689568  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:41.689651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:41.708449  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.708472  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:41.708520  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:41.727897  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.727918  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:41.727963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:41.748169  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.748200  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:41.748252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:41.767148  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.767172  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:41.767224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:41.789562  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.789589  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:41.789665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:41.808259  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.808281  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:41.808292  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:41.808304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.827093  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:41.827120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:41.854644  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:41.854671  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:41.901960  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:41.901995  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:41.921983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:41.922011  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:41.978723  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:44.479583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:44.491055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:44.513749  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.513779  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:44.513836  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:44.535619  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.535648  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:44.535722  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:44.555441  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.555464  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:44.555512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:44.574828  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.574851  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:44.574895  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:44.593270  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.593293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:44.593350  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:44.612157  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.612182  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:44.612239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:44.630342  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.630366  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:44.630417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:44.648864  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.648893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:44.648905  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:44.648917  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:44.698462  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:44.698494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:44.718432  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:44.718463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:44.777738  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
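	The cycle above is minikube's apiserver wait loop: every few seconds it re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, probes `docker ps -a` for each expected control-plane container, and, finding none, re-gathers the kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of the same probes run by hand (the profile name is a placeholder, and `minikube ssh` access from the host is assumed):

	    # <profile> is hypothetical; substitute the failing profile name
	    minikube ssh -p <profile> -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    minikube ssh -p <profile> -- docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
	    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400 --no-pager

	An empty pgrep and docker result on every iteration, as logged here, means the apiserver process was never started, so each describe-nodes attempt below fails before it can reach the API.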
	I1223 00:04:44.777764  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:44.777781  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:44.798488  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:44.798522  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:47.328787  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:47.340091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:47.359764  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.359786  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:47.359834  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:47.378531  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.378557  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:47.378633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:47.397279  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.397303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:47.397351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:47.415379  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.415404  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:47.415449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:47.433342  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.433363  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:47.433407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:47.452134  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.452153  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:47.452195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:47.470492  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.470514  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:47.470565  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:47.489435  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.489462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:47.489475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:47.489490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:47.543310  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:47.543341  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:47.563678  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:47.563716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:47.618877  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:47.611492   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.612043   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.613658   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.614136   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.615686   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:47.618902  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:47.618916  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:47.637117  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:47.637142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:50.165288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:50.176485  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:50.195504  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.195530  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:50.195573  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:50.214411  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.214435  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:50.214486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:50.232050  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.232073  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:50.232113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:50.249723  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.249747  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:50.249805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:50.269197  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.269220  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:50.269262  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:50.287018  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.287042  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:50.287084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:50.304852  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.304876  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:50.304923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:50.323126  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.323150  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:50.323164  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:50.323177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:50.371303  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:50.371328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:50.391396  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:50.391419  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:50.446479  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:50.439351   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.440054   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.441636   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.442091   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.443655   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
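	Each failed describe-nodes attempt has the same anatomy: kubectl's discovery client (the memcache.go:265 lines, five per attempt) retries the /api group-list request against the https://localhost:8443 endpoint named in /var/lib/minikube/kubeconfig, receives ECONNREFUSED each time because nothing is listening on 8443, and gives up with the "connection to the server localhost:8443 was refused" summary. A quick in-node check that the listener is absent (assumes ss and curl are available in the node image):

	    sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'
	    curl -ks --max-time 5 https://localhost:8443/livez || echo 'apiserver unreachable'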
	I1223 00:04:50.446503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:50.446519  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:50.466869  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:50.466895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.004783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:53.016488  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:53.037102  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.037130  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:53.037175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:53.056487  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.056509  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:53.056551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:53.074919  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.074938  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:53.074983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:53.093142  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.093163  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:53.093203  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:53.112007  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.112030  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:53.112079  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:53.130737  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.130759  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:53.130802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:53.149980  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.150009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:53.150057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:53.167468  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.167493  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:53.167503  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:53.167513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.195775  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:53.195800  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:53.243212  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:53.243238  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:53.263047  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:53.263073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:53.319009  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:53.311761   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.312309   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.313970   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.314449   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.315934   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:53.319029  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:53.319041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:55.838963  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:55.850169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:55.868811  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.868833  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:55.868878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:55.887281  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.887309  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:55.887361  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:55.905343  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.905372  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:55.905425  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:55.922787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.922811  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:55.922858  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:55.941063  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.941090  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:55.941143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:55.960388  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.960413  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:55.960549  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:55.978787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.978810  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:55.978854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:55.996489  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.996516  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:55.996530  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:55.996542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:56.048197  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:56.048229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:56.068640  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:56.068668  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:56.124436  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:56.117357   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.117926   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119480   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119940   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.121489   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:56.124461  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:56.124478  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:56.143079  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:56.143102  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.672032  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:58.683539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:58.702739  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.702762  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:58.702814  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:58.721434  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.721465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:58.721514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:58.741740  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.741768  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:58.741811  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:58.760960  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.760982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:58.761035  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:58.780979  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.781001  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:58.781045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:58.799417  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.799453  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:58.799501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:58.817985  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.818007  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:58.818051  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:58.837633  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.837659  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:58.837671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:58.837683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:58.856421  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:58.856448  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.883550  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:58.883574  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:58.932130  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:58.932158  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:58.953160  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:58.953189  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:59.009951  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:59.002105   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.002736   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004318   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004809   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.006430   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:01.512529  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:01.523921  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:01.542499  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.542525  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:01.542569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:01.560824  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.560850  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:01.560892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:01.578994  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.579017  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:01.579060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:01.597267  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.597293  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:01.597346  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:01.615860  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.615880  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:01.615919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:01.635022  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.635045  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:01.635084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:01.654257  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.654282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:01.654338  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:01.672470  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.672492  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:01.672502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:01.672513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:01.720496  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:01.720525  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:01.740698  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:01.740724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:01.800538  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:01.800562  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:01.800579  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:01.820265  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:01.820291  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.348938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:04.360190  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:04.379095  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.379124  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:04.379177  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:04.396991  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.397012  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:04.397057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:04.415658  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.415682  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:04.415750  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:04.434023  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.434049  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:04.434093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:04.452721  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.452744  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:04.452791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:04.471221  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.471247  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:04.471294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:04.489656  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.489685  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:04.489734  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:04.508637  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.508669  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:04.508689  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:04.508702  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:04.526928  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:04.526953  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.553896  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:04.553923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:04.602972  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:04.602999  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:04.622788  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:04.622812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:04.678232  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:07.179923  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:07.191963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:07.211239  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.211263  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:07.211304  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:07.230281  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.230302  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:07.230343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:07.249365  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.249391  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:07.249443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:07.269410  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.269431  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:07.269484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:07.288681  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.288711  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:07.288756  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:07.307722  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.307742  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:07.307785  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:07.324479  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.324503  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:07.324557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:07.343010  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.343030  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:07.343041  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:07.343056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:07.370090  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:07.370116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:07.416268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:07.416294  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:07.436063  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:07.436088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:07.492624  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:07.492650  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:07.492667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.011735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:10.025412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:10.046816  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.046848  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:10.046917  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:10.065664  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.065693  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:10.065752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:10.084486  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.084512  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:10.084569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:10.103489  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.103510  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:10.103563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:10.121383  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.121413  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:10.121457  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:10.139817  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.139840  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:10.139883  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:10.158123  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.158142  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:10.158195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:10.176690  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.176714  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:10.176728  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:10.176743  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:10.221786  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:10.221818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:10.241642  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:10.241670  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:10.306092  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
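	With no control-plane containers ever created, the root cause has to be in the kubelet and Docker journals this loop is already collecting. A sketch of how one might scan those captures for the first fatal event (the grep pattern is illustrative, not taken from this report):

	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|fatal' | head -n 20
	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager | grep -iE 'error|fail' | head -n 20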
	I1223 00:05:10.306110  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:10.306122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.325227  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:10.325254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:12.853199  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:12.864559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:12.883528  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.883553  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:12.883615  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:12.901914  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.901946  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:12.902003  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:12.920676  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.920703  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:12.920746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:12.938812  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.938840  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:12.938898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:12.956564  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.956588  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:12.956651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:12.975030  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.975056  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:12.975112  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:12.992748  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.992770  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:12.992819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:13.013710  687772 logs.go:282] 0 containers: []
	W1223 00:05:13.013733  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:13.013744  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:13.013756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:13.044889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:13.044920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:13.090565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:13.090611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:13.110578  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:13.110614  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:13.166048  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:13.166066  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:13.166079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.685941  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:15.697434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:15.716560  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.716607  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:15.716664  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:15.735775  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.735799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:15.735847  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:15.753974  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.753996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:15.754046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:15.771763  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.771788  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:15.771846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:15.790222  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.790249  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:15.790294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:15.808671  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.808691  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:15.808735  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:15.827295  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.827324  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:15.827377  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:15.845637  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.845658  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:15.845668  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:15.845679  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:15.892975  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:15.893004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:15.912599  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:15.912626  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:15.967763  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:15.967788  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:15.967801  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.986603  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:15.986632  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:18.516732  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:18.529415  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:18.549048  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.549069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:18.549113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:18.567672  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.567705  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:18.567771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:18.586513  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.586538  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:18.586613  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:18.604518  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.604538  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:18.604579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:18.623446  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.623467  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:18.623510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:18.642213  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.642230  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:18.642279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:18.660501  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.660521  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:18.660563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:18.678846  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.678869  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:18.678882  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:18.678893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:18.727936  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:18.727965  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:18.749033  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:18.749059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:18.804351  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:18.804386  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:18.804401  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:18.822650  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:18.822681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:21.351938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:21.363094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:21.382091  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.382123  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:21.382179  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:21.400790  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.400813  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:21.400861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:21.418989  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.419014  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:21.419060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:21.437814  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.437839  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:21.437898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:21.456967  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.456991  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:21.457045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:21.475541  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.475566  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:21.475644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:21.494493  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.494518  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:21.494576  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:21.513952  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.513979  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:21.513990  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:21.514001  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:21.563253  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:21.563283  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:21.583663  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:21.583693  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:21.638754  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:21.638774  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:21.638786  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:21.657674  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:21.657704  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:24.188905  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:24.200277  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:24.220108  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.220133  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:24.220188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:24.240286  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.240307  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:24.240351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:24.260644  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.260670  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:24.260724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:24.282918  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.282943  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:24.282990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:24.302929  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.302956  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:24.303013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:24.322124  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.322145  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:24.322196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:24.340965  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.340993  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:24.341050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:24.360121  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.360148  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:24.360162  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:24.360177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:24.406776  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:24.406809  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:24.428882  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:24.428909  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:24.484257  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:24.484286  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:24.484304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:24.504724  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:24.504752  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.038561  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:27.050259  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:27.069265  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.069288  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:27.069333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:27.088081  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.088108  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:27.088171  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:27.107172  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.107198  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:27.107246  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:27.125773  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.125804  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:27.125862  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:27.144259  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.144282  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:27.144339  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:27.163197  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.163217  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:27.163263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:27.181942  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.181971  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:27.182030  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:27.199936  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.199964  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:27.199980  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:27.199996  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:27.218431  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:27.218456  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.246756  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:27.246783  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:27.297557  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:27.297603  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:27.318177  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:27.318205  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:27.374968  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:29.875712  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:29.887100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:29.906809  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.906834  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:29.906892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:29.926388  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.926414  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:29.926467  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:29.946220  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.946248  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:29.946302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:29.967102  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.967131  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:29.967188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:29.986540  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.986564  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:29.986631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:30.004809  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.004835  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:30.004881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:30.023625  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.023655  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:30.023711  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:30.042067  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.042089  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:30.042100  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:30.042120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:30.061885  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:30.061913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:30.090401  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:30.090432  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:30.138962  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:30.138993  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:30.159224  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:30.159250  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:30.216295  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:32.716974  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:32.728432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:32.748217  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.748245  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:32.748292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:32.767866  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.767887  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:32.767935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:32.788690  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.788723  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:32.788782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:32.808366  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.808397  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:32.808460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:32.827631  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.827655  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:32.827714  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:32.846429  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.846456  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:32.846511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:32.865177  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.865202  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:32.865258  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:32.885235  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.885258  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:32.885268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:32.885280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:32.905218  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:32.905245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:32.960860  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:32.960885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:32.960905  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:32.979917  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:32.979943  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:33.008187  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:33.008218  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.555359  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:35.566888  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:35.586562  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.586588  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:35.586657  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:35.605495  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.605522  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:35.605579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:35.624671  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.624700  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:35.624760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:35.643198  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.643222  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:35.643278  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:35.662223  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.662245  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:35.662290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:35.681991  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.682016  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:35.682071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:35.700985  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.701009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:35.701062  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:35.719976  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.720000  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:35.720015  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:35.720029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.767694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:35.767728  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:35.792896  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:35.792935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:35.849448  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:35.849470  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:35.849491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:35.868248  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:35.868274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:38.397175  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:38.408856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:38.428054  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.428085  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:38.428141  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:38.447350  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.447376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:38.447428  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:38.466426  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.466455  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:38.466512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:38.486074  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.486104  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:38.486173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:38.505584  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.505626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:38.505709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:38.527387  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.527416  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:38.527473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:38.547928  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.547955  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:38.548015  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:38.568237  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.568262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:38.568274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:38.568285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:38.616522  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:38.616555  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:38.638676  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:38.638707  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:38.694984  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
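The connection-refused errors are consistent with the empty kube-apiserver container list above: nothing is listening on port 8443 at all. A quick way to confirm that directly from the node, assuming curl is available in the minikube image:

    # While no apiserver is running this fails with "Connection refused";
    # once one is listening, an HTTP status (often 401/403 without
    # credentials) comes back instead.
    curl -ksS https://localhost:8443/healthz; echo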
	I1223 00:05:38.695006  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:38.695019  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:38.713940  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:38.713969  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:41.244859  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:41.256283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:41.275201  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.275233  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:41.275280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:41.295272  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.295299  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:41.295353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:41.313039  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.313069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:41.313135  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:41.331394  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.331418  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:41.331491  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:41.350556  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.350583  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:41.350650  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:41.369215  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.369242  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:41.369290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:41.387799  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.387826  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:41.387877  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:41.406760  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.406785  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:41.406799  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:41.406813  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:41.453518  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:41.453548  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:41.473671  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:41.473700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:41.531098  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:41.531124  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:41.531139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:41.551968  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:41.551997  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:44.081115  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:44.092382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:44.111299  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.111326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:44.111381  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:44.130168  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.130196  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:44.130250  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:44.149028  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.149052  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:44.149109  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:44.167326  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.167346  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:44.167388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:44.185875  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.185898  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:44.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:44.205297  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.205320  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:44.205370  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:44.224561  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.224608  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:44.224661  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:44.242760  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.242782  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:44.242795  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:44.242808  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:44.290363  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:44.290399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:44.310780  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:44.310806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:44.367913  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:44.367931  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:44.367945  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:44.387052  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:44.387080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:46.916305  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:46.927926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:46.946856  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.946882  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:46.946941  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:46.965651  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.965674  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:46.965720  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:46.984835  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.984863  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:46.984920  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:47.005005  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.005033  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:47.005095  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:47.026916  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.026948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:47.026996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:47.047971  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.048003  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:47.048064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:47.067344  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.067372  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:47.067424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:47.087055  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.087079  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:47.087093  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:47.087107  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:47.134052  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:47.134085  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:47.154446  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:47.154479  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:47.210710  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:47.210734  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:47.210746  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:47.230988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:47.231017  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:49.759465  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:49.771325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:49.791131  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.791160  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:49.791219  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:49.810792  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.810814  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:49.810859  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:49.829432  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.829454  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:49.829499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:49.847527  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.847548  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:49.847603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:49.866252  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.866275  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:49.866315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:49.885934  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.885955  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:49.885996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:49.903668  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.903690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:49.903733  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:49.923276  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.923298  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:49.923309  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:49.923320  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:49.968185  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:49.968217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:49.988993  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:49.989021  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:50.052060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:50.052083  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:50.052100  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:50.070860  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:50.070885  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:52.599679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:52.611289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:52.629699  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.629724  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:52.629782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:52.648660  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.648689  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:52.648740  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:52.667204  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.667232  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:52.667287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:52.685635  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.685667  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:52.685718  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:52.703669  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.703692  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:52.703742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:52.721467  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.721495  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:52.721553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:52.739858  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.739885  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:52.739930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:52.759123  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.759151  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:52.759165  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:52.759178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:52.812520  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:52.812552  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:52.832551  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:52.832578  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:52.887680  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:52.887700  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:52.887719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:52.906246  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:52.906276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.444344  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:55.455763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:55.475305  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.475332  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:55.475389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:55.494094  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.494117  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:55.494164  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:55.511874  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.511896  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:55.511942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:55.530088  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.530113  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:55.530159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:55.548749  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.548778  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:55.548828  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:55.567179  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.567204  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:55.567269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:55.586315  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.586343  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:55.586395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:55.605282  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.605303  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:55.605314  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:55.605327  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:55.624085  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:55.624113  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.652038  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:55.652065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:55.699247  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:55.699274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:55.719031  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:55.719058  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:55.777078  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:58.278708  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:58.291024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:58.310944  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.310971  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:58.311027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:58.329419  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.329443  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:58.329499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:58.346556  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.346579  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:58.346653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:58.364565  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.364601  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:58.364653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:58.383020  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.383043  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:58.383089  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:58.401354  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.401381  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:58.401440  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:58.419356  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.419377  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:58.419426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:58.438428  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.438449  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:58.438461  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:58.438477  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:58.458325  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:58.458353  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:58.513127  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:58.513156  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:58.513173  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:58.532159  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:58.532183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:58.559409  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:58.559433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:01.105933  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:01.117378  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:01.136395  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.136418  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:01.136463  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:01.155037  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.155063  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:01.155111  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:01.173939  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.173960  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:01.174004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:01.193250  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.193271  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:01.193312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:01.210927  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.210948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:01.210990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:01.229293  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.229319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:01.229367  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:01.247971  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.247997  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:01.248059  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:01.267642  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.267667  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:01.267688  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:01.267718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:01.290552  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:01.290581  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:01.346096  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:01.346115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:01.346127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:01.364490  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:01.364516  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:01.391895  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:01.391918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:03.938979  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:03.950393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:03.969334  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.969364  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:03.969448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:03.988183  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.988205  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:03.988252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:04.007742  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.007767  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:04.007821  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:04.027502  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.027528  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:04.027582  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:04.048194  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.048222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:04.048286  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:04.067020  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.067044  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:04.067096  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:04.085747  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.085776  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:04.085829  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:04.103906  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.103936  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:04.103950  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:04.103963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:04.131404  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:04.131427  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:04.178862  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:04.178893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:04.198797  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:04.198823  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:04.255150  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:04.255174  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:04.255190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
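	
The lines above are one pass of a polling cycle that repeats below with only timestamps and PIDs changing: probe for a kube-apiserver process with pgrep, enumerate the expected control-plane containers, then re-gather kubelet, dmesg, describe-nodes, and Docker logs. A minimal, self-contained Go sketch of that cycle follows, assuming local execution via bash (the log runs these commands over SSH inside the node); the helper names apiserverRunning and gatherLogs are illustrative, not minikube's.

// One cycle of the wait loop, mirroring the commands logged above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
// pgrep exits non-zero when no matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// gatherLogs re-runs the journal and dmesg commands seen in each cycle.
func gatherLogs() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u docker -u cri-docker -n 400",
	}
	for _, c := range cmds {
		out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("gathered %d bytes from: %s\n", len(out), c)
	}
}

func main() {
	for !apiserverRunning() {
		gatherLogs()
		time.Sleep(2500 * time.Millisecond) // the cycles above are roughly 2.5s apart
	}
	fmt.Println("kube-apiserver process found")
}
	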
	I1223 00:06:06.777149  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:06.788444  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:06.807818  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.807839  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:06.807881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:06.827018  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.827044  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:06.827092  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:06.845320  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.845342  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:06.845395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:06.862837  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.862856  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:06.862907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:06.880629  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.880649  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:06.880690  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:06.898665  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.898694  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:06.898762  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:06.916571  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.916606  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:06.916662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:06.934190  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.934213  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:06.934228  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:06.934245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:06.961869  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:06.961895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:07.008426  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:07.008460  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:07.033602  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:07.033641  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:07.089432  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:07.089452  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:07.089463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.608089  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:09.619510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:09.638402  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.638426  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:09.638473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:09.657218  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.657247  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:09.657292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:09.675838  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.675871  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:09.675935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:09.694913  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.694939  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:09.694992  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:09.714024  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.714046  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:09.714097  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:09.733120  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.733142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:09.733188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:09.752081  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.752104  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:09.752148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:09.770630  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.770661  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:09.770676  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:09.770700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:09.818931  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:09.818967  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:09.839282  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:09.839309  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:09.895206  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:09.895234  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:09.895247  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.913965  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:09.913994  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
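	
Each cycle detects control-plane components by container name rather than through the API: the kubelet names containers with a k8s_<component> prefix, so one docker ps name filter per component suffices, and an empty ID list yields the "No container was found matching" warnings seen throughout. A runnable Go sketch of that lookup, assuming a local docker CLI; the component list is copied from the filters in the log:

// Per-component container lookup via docker name filters.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// corresponds to the W "No container was found matching" lines above
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}
	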
	I1223 00:06:12.442178  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:12.453355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:12.472243  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.472267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:12.472312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:12.491113  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.491136  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:12.491192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:12.511291  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.511317  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:12.511376  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:12.532112  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.532141  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:12.532196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:12.551226  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.551250  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:12.551293  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:12.569426  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.569449  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:12.569504  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:12.588494  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.588520  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:12.588569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:12.606610  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.606644  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:12.606657  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:12.606674  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.634113  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:12.634143  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:12.681112  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:12.681140  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:12.700711  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:12.700736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:12.757239  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:12.757259  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:12.757273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.278124  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:15.290283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:15.309406  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.309433  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:15.309481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:15.328093  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.328119  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:15.328173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:15.346922  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.346949  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:15.347006  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:15.364932  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.364960  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:15.365013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:15.383120  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.383144  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:15.383188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:15.401332  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.401355  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:15.401404  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:15.419961  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.419986  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:15.420037  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:15.438746  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.438769  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:15.438780  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:15.438793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:15.486016  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:15.486044  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:15.506911  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:15.506939  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:15.566808  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:15.566826  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:15.566836  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.586013  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:15.586040  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
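	
Every describe-nodes attempt above fails for the same underlying reason: nothing is listening on localhost:8443 inside the node, so kubectl's API discovery gets "dial tcp [::1]:8443: connect: connection refused" five times (one per discovery request) before giving up. The condition can be reproduced without kubectl by a plain TCP dial; this is a diagnostic sketch, not minikube code:

// Reproduce the failing check with a bare TCP dial instead of kubectl.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Prints the same root cause kubectl reports above:
		// connect: connection refused
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
	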
	I1223 00:06:18.115753  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:18.127221  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:18.146018  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.146048  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:18.146094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:18.165274  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.165294  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:18.165337  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:18.183880  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.183904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:18.183947  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:18.202061  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.202082  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:18.202130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:18.219858  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.219892  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:18.219945  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:18.238966  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.238987  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:18.239032  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:18.260921  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.260949  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:18.260997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:18.280705  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.280735  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:18.280750  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:18.280764  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:18.299732  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:18.299756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.327603  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:18.327631  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:18.375722  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:18.375749  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:18.397572  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:18.397611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:18.454135  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
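	
The describe-nodes step itself shells out to the versioned kubectl that minikube installs inside the node, pointed at the in-node kubeconfig; both paths below are taken verbatim from the log. A sketch of that invocation, runnable on such a node:

// The describe-nodes invocation, with paths copied from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		// With no apiserver on localhost:8443 this exits 1, as logged:
		// "The connection to the server localhost:8443 was refused"
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
	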
	I1223 00:06:20.955833  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:20.967309  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:20.986237  687772 logs.go:282] 0 containers: []
	W1223 00:06:20.986258  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:20.986301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:21.004350  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.004377  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:21.004434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:21.022893  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.022919  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:21.022974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:21.042421  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.042441  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:21.042484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:21.061267  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.061293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:21.061355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:21.079988  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.080011  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:21.080064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:21.098196  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.098225  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:21.098279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:21.117158  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.117180  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:21.117191  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:21.117202  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:21.146189  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:21.146215  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:21.192645  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:21.192677  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:21.212689  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:21.212716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:21.269438  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:21.261783   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.262320   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.263990   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.264498   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.266022   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:21.261783   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.262320   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.263990   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.264498   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.266022   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:21.269462  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:21.269480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:23.789716  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:23.801130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:23.820155  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.820180  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:23.820239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:23.838850  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.838875  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:23.838919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:23.856860  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.856881  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:23.856931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:23.874630  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.874653  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:23.874700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:23.893425  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.893454  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:23.893521  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:23.912712  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.912734  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:23.912789  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:23.931097  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.931124  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:23.931178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:23.949113  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.949138  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:23.949152  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:23.949168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:23.996109  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:23.996137  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:24.016228  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:24.016254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:24.071647  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:24.064286   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.064800   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066333   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066786   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.068314   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:24.064286   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.064800   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066333   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066786   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.068314   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:24.071665  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:24.071680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:24.090918  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:24.090944  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
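	
The container-status command is a small shell fallback chain: the backticks substitute the crictl path when `which` finds one (otherwise the bare word crictl, whose execution then fails), and the trailing || falls back to a plain docker ps -a. A Go sketch that runs the same one-liner locally, assuming bash is available:

// Run the crictl-or-docker fallback exactly as it appears in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Backtick substitution picks crictl when installed; || falls back
	// to a docker listing when crictl is absent or fails.
	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}
	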
	I1223 00:06:26.624354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:26.635840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:26.654444  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.654473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:26.654537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:26.673364  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.673388  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:26.673436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:26.692467  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.692489  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:26.692539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:26.711627  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.711656  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:26.711709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:26.730302  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.730332  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:26.730386  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:26.748910  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.748939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:26.748995  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:26.768525  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.768548  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:26.768603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:26.788434  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.788462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:26.788476  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:26.788491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:26.845463  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:26.845482  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:26.845494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:26.864140  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:26.864167  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.890448  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:26.890476  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:26.937390  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:26.937422  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.457766  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:29.469205  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:29.488353  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.488376  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:29.488431  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:29.508035  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.508059  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:29.508114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:29.528210  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.528234  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:29.528280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:29.546344  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.546370  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:29.546432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:29.565125  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.565153  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:29.565200  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:29.584111  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.584142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:29.584195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:29.602714  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.602735  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:29.602778  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:29.621012  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.621042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:29.621058  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:29.621073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:29.669132  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:29.669168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.689406  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:29.689431  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:29.746681  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:29.746703  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:29.746720  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:29.765762  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:29.765793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
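	
Cycles like these do not run forever: the surrounding start code polls on a fixed interval under a deadline, and on timeout the gathered diagnostics become the test's failure output. A generic, deadline-bounded version of such a loop; the 2.5s tick matches the log spacing, while the 8-minute budget is purely an illustrative assumption, not minikube's actual value:

// A generic, deadline-bounded version of the wait loop.
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for kube-apiserver")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 8*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		// At this point the diagnostics gathered above become the failure output.
		fmt.Println(err)
	}
}
	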
	I1223 00:06:32.299443  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:32.310848  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:32.330298  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.330326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:32.330380  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:32.349664  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.349692  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:32.349745  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:32.367944  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.367969  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:32.368081  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:32.386919  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.386940  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:32.386983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:32.405416  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.405440  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:32.405487  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:32.423080  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.423100  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:32.423144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:32.441255  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.441282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:32.441336  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:32.459763  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.459789  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:32.459801  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:32.459812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:32.507284  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:32.507314  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:32.529983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:32.530014  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:32.587816  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:32.587843  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:32.587860  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:32.607796  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:32.607826  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.136489  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:35.147976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:35.166774  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.166794  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:35.166846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:35.185872  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.185899  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:35.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:35.204053  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.204074  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:35.204115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:35.223056  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.223077  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:35.223126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:35.241616  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.241645  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:35.241699  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:35.260422  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.260476  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:35.260536  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:35.279168  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.279192  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:35.279238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:35.297208  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.297236  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:35.297252  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:35.297267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:35.317273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:35.317299  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:35.374319  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:35.374337  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:35.374349  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:35.393025  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:35.393050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.420499  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:35.420537  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:37.968117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:37.979448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:37.998789  687772 logs.go:282] 0 containers: []
	W1223 00:06:37.998815  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:37.998861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:38.019815  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.019847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:38.019910  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:38.042524  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.042552  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:38.042617  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:38.061464  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.061489  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:38.061544  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:38.080482  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.080509  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:38.080558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:38.099189  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.099215  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:38.099279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:38.118161  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.118188  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:38.118244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:38.136752  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.136786  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:38.136803  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:38.136819  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:38.182751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:38.182779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:38.202352  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:38.202375  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:38.257901  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:38.257922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:38.257933  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:38.276963  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:38.276988  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:40.806792  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:40.818244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:40.837324  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.837348  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:40.837402  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:40.856364  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.856387  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:40.856453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:40.874753  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.874780  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:40.874831  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:40.893167  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.893193  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:40.893242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:40.910901  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.910924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:40.910976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:40.930108  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.930133  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:40.930191  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:40.949021  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.949047  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:40.949101  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:40.967221  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.967246  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:40.967260  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:40.967276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:40.988752  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:40.988779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:41.048349  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:41.048374  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:41.048387  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:41.067112  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:41.067138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:41.093421  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:41.093445  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:43.639363  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:43.653263  687772 out.go:203] 
	W1223 00:06:43.654345  687772 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1223 00:06:43.654374  687772 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1223 00:06:43.654383  687772 out.go:285] * Related issues:
	W1223 00:06:43.654397  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1223 00:06:43.654411  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1223 00:06:43.655505  687772 out.go:203] 
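	
	The K8S_APISERVER_MISSING exit above means the 6m0s wait never saw an apiserver process. The probe it repeats is the pgrep visible throughout this log; a minimal sketch of the same wait, runnable inside the node (the pgrep line is verbatim from this log, the loop bounds are an assumption matching the 6m timeout):
	
	    # poll once per second for up to 6 minutes, as the wait loop above does
	    for i in $(seq 1 360); do
	        sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	        sleep 1
	    done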
	
	
	==> Docker <==
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.188726089Z" level=info msg="Restoring containers: start."
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.201365877Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.219292925Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.739437960Z" level=info msg="Loading containers: done."
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.749914456Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.749957470Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.750005698Z" level=info msg="Initializing buildkit"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.769087870Z" level=info msg="Completed buildkit initialization"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777195260Z" level=info msg="Daemon has completed initialization"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777258358Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777420063Z" level=info msg="API listen on /run/docker.sock"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777484333Z" level=info msg="API listen on [::]:2376"
	Dec 23 00:00:40 newest-cni-348344 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 23 00:00:41 newest-cni-348344 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Start docker client with request timeout 0s"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Loaded network plugin cni"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 23 00:00:41 newest-cni-348344 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:50.508401   21301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:50.509056   21301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:50.510686   21301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:50.511129   21301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:50.512670   21301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
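	
	The repeated "connection refused" on [::1]:8443 is consistent with the empty container lists above: nothing is bound to the apiserver port at all. A quick confirmation from inside the node, assuming iproute2's ss is available in the image:
	
	    # list TCP listeners; expect no match while the apiserver is down
	    sudo ss -ltn | grep 8443 || echo "nothing listening on :8443"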
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[Dec22 23:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +0.088099] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 12 57 26 ed f1 08 06
	[  +5.341024] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[ +14.537406] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 72 df 3b 35 8d 08 06
	[  +0.000388] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +2.465032] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 84 3f 6a 28 22 08 06
	[  +0.000373] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[Dec23 00:00] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	[Dec23 00:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 20 71 68 66 a5 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
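	
	The "martian source" lines are the kernel flagging broadcast ARP traffic on the 10.244.0.0/16 pod network; they are routine noise in this setup, not the failure. Whether they are logged at all is governed by a standard sysctl (shown read-only here):
	
	    # 1 means martian packets are logged, which is what produced the lines above
	    sysctl net.ipv4.conf.all.log_martians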
	
	
	==> kernel <==
	 00:06:50 up  3:49,  0 user,  load average: 0.49, 1.43, 1.66
	Linux newest-cni-348344 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 23 00:06:46 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:46 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:46 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 23 00:06:46 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:46 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:47 newest-cni-348344 kubelet[20971]: E1223 00:06:47.037246   20971 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:47 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:47 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:47 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:48 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:48 newest-cni-348344 kubelet[21108]: E1223 00:06:48.540941   21108 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:48 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:48 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:49 newest-cni-348344 kubelet[21151]: E1223 00:06:49.287119   21151 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:49 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:50 newest-cni-348344 kubelet[21177]: E1223 00:06:50.048621   21177 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
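	
	This crash loop is the root cause of the missing apiserver: kubelet v1.35.0-rc.1 validates the host cgroup hierarchy and refuses to start on cgroup v1, so the control-plane static pods (apiserver included) are never launched. The host mode can be confirmed with plain coreutils:
	
	    # cgroup2fs => cgroup v2 (accepted); tmpfs => cgroup v1 (rejected by this kubelet)
	    stat -fc %T /sys/fs/cgroup/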
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (304.142966ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-348344" apiserver is not running, skipping kubectl commands (state="Stopped")
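
APIServer reports "Stopped" above while the Host check further down reports "Running": the kicbase container is up, but the kubelet crash loop means nothing is serving inside it. All three fields can be snapshotted in one call; a sketch reusing the binary and profile from this run (the combined format string is an assumption built from the fields already queried one at a time here):

    out/minikube-linux-amd64 status -p newest-cni-348344 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'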
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-348344
helpers_test.go:244: (dbg) docker inspect newest-cni-348344:

-- stdout --
	[
	    {
	        "Id": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	        "Created": "2025-12-22T23:50:45.124975619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 687974,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-23T00:00:34.301956639Z",
	            "FinishedAt": "2025-12-23T00:00:32.890201351Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hostname",
	        "HostsPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/hosts",
	        "LogPath": "/var/lib/docker/containers/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b/133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b-json.log",
	        "Name": "/newest-cni-348344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-348344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-348344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "133dc19d84d424ed179e624a54285c88a37ad637a1692732b3536ec0f181551b",
	                "LowerDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6020e8f517a187af8c88e3692b2c53fcf5fcbaeb46fc7b99af192b869c28d41a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-348344",
	                "Source": "/var/lib/docker/volumes/newest-cni-348344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-348344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-348344",
	                "name.minikube.sigs.k8s.io": "newest-cni-348344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1d6c9b4cbbb98d27f15b901c20b574a86c3cb628ad2da992c2e0c5437cff03b0",
	            "SandboxKey": "/var/run/docker/netns/1d6c9b4cbbb9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-348344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1020bfe2df349af00e9e2f4197eff27d709a25503c20a26c662019055cba21bb",
	                    "EndpointID": "66b6b308d2bcc6eca28baac06e33fe8d42bbea1f9fe8f1f5ee1a462ebfeba9bc",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:46:4e:43:ff:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-348344",
	                        "133dc19d84d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
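
The Ports map in the inspect output shows the apiserver's 8443/tcp published on a loopback host port. That value can be extracted directly with docker's template syntax (standard docker inspect -f; container name taken from this run):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-348344
    # prints 33171 for the capture above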
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (295.11893ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-348344 logs -n 25: (1.164640454s)
E1223 00:06:53.430642   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-003676 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status docker --all --full --no-pager          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat docker --no-pager                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/docker/daemon.json                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo docker system info                                       │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat cri-docker --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cri-dockerd --version                                    │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status containerd --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat containerd --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /lib/systemd/system/containerd.service               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/containerd/config.toml                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo containerd config dump                                   │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status crio --all --full --no-pager            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │                     │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat crio --no-pager                            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo crio config                                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ delete  │ -p kubenet-003676                                                               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ image   │ newest-cni-348344 image list --format=json                                      │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ pause   │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ unpause │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/23 00:00:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1223 00:00:34.066824  687772 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:34.067051  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067058  687772 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:34.067063  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067257  687772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:34.067701  687772 out.go:368] Setting JSON to false
	I1223 00:00:34.068753  687772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13374,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:34.068805  687772 start.go:143] virtualization: kvm guest
	W1223 00:00:29.964565  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:31.965119  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:33.965297  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:34.070524  687772 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:34.072192  687772 notify.go:221] Checking for updates...
	I1223 00:00:34.072201  687772 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:34.073912  687772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:34.074996  687772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:34.076047  687772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:34.077175  687772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:34.078295  687772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:34.079882  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:34.080446  687772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:34.106101  687772 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:34.106213  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.161275  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.151129133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.161373  687772 docker.go:319] overlay module found
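The docker system info --format "{{json .}}" probes above (run once here and again during driver validation) are how the daemon's state gets snapshotted before an existing profile is reused. Below is a minimal Go sketch of the same probe, assuming only a handful of the JSON fields visible in the info dump; the struct is illustrative, not minikube's actual type, and the "overlay module found" check in docker.go likely inspects the kernel module rather than the storage driver shown here.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps only the fields this sketch needs from "docker system info".
	type dockerInfo struct {
		Driver            string `json:"Driver"`
		ServerVersion     string `json:"ServerVersion"`
		ContainersRunning int    `json:"ContainersRunning"`
	}

	func main() {
		// Same command the log shows; JSON output keeps parsing robust.
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("driver=%s server=%s running=%d overlay2=%v\n",
			info.Driver, info.ServerVersion, info.ContainersRunning, info.Driver == "overlay2")
	}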
	I1223 00:00:34.163775  687772 out.go:179] * Using the docker driver based on existing profile
	I1223 00:00:34.164711  687772 start.go:309] selected driver: docker
	I1223 00:00:34.164723  687772 start.go:928] validating driver "docker" against &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.164829  687772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1223 00:00:34.165640  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.231050  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.221418272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.231362  687772 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1223 00:00:34.231388  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:34.231454  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:34.231489  687772 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.233188  687772 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1223 00:00:34.234320  687772 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:34.235503  687772 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:34.236442  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:34.236471  687772 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1223 00:00:34.236486  687772 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:34.236540  687772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:34.236575  687772 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1223 00:00:34.236586  687772 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1223 00:00:34.236749  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.256540  687772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:34.256557  687772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1223 00:00:34.256572  687772 cache.go:243] Successfully downloaded all kic artifacts
	I1223 00:00:34.256623  687772 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1223 00:00:34.256695  687772 start.go:364] duration metric: took 39.918µs to acquireMachinesLock for "newest-cni-348344"
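The acquireMachinesLock lines above state the locking parameters in the clear: poll with Delay:500ms until Timeout:10m0s, then report how long the acquisition took. A file-based Go sketch of that retry pattern follows; the lock path and the O_CREATE|O_EXCL approach are assumptions made for illustration, not minikube's actual lock store.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file every delay until timeout,
	// mirroring the Delay:500ms Timeout:10m0s values in the log above.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		release, err := acquire("/tmp/newest-cni-348344.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Printf("took %s to acquire lock\n", time.Since(start)) // cf. the duration metric above
	}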
	I1223 00:00:34.256714  687772 start.go:96] Skipping create...Using existing machine configuration
	I1223 00:00:34.256719  687772 fix.go:54] fixHost starting: 
	I1223 00:00:34.256918  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.273955  687772 fix.go:112] recreateIfNeeded on newest-cni-348344: state=Stopped err=<nil>
	W1223 00:00:34.273978  687772 fix.go:138] unexpected machine state, will restart: <nil>
	W1223 00:00:31.998314  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:34.497435  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:36.498048  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:34.276035  687772 out.go:252] * Restarting existing docker container for "newest-cni-348344" ...
	I1223 00:00:34.276100  687772 cli_runner.go:164] Run: docker start newest-cni-348344
	I1223 00:00:34.520995  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.540249  687772 kic.go:430] container "newest-cni-348344" state is running.
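Restarting a stopped profile reduces to two docker CLI calls: "docker start", then polling "docker container inspect --format {{.State.Status}}" until the engine reports running, which is the transition kic.go logs above. A hedged Go sketch of that loop (the attempt count and interval below are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const name = "newest-cni-348344" // profile name from the log
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
		// Poll until the engine reports "running", as the log does after the start.
		for i := 0; i < 60; i++ {
			state, err := containerState(name)
			if err == nil && state == "running" {
				fmt.Println("container", name, "state is running.")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("container never reached running state")
	}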
	I1223 00:00:34.540736  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:34.560319  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.560718  687772 machine.go:94] provisionDockerMachine start ...
	I1223 00:00:34.560825  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:34.581907  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:34.582194  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:34.582211  687772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1223 00:00:34.583095  687772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41172->127.0.0.1:33168: read: connection reset by peer
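That handshake failure is expected this early: the container was started milliseconds ago and its sshd is not yet accepting connections, so the provisioner simply re-dials until the hostname command succeeds (three seconds later, below). A sketch of such a retry using golang.org/x/crypto/ssh and the forwarded port 33168 from the log; the key auth is elided, and the insecure host-key callback is only defensible because the target is a local test container.

	package main

	import (
		"fmt"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps re-dialing until sshd inside the freshly restarted
	// container accepts the handshake; early attempts typically fail with
	// "connection reset by peer", exactly as the log line above shows.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
		deadline := time.Now().Add(timeout)
		for {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			if time.Now().After(deadline) {
				return nil, err
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
			Auth:            []ssh.AuthMethod{ /* key auth elided */ },
			Timeout:         10 * time.Second,
		}
		client, err := dialWithRetry("127.0.0.1:33168", cfg, time.Minute)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, _ := sess.Output("hostname") // same first command the provisioner runs
		fmt.Printf("%s", out)
	}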
	I1223 00:00:37.726578  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.726621  687772 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1223 00:00:37.726764  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.746947  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.747183  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.747203  687772 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1223 00:00:37.900818  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.900900  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.919317  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.919561  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.919579  687772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1223 00:00:38.062239  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
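The shell snippet above makes the /etc/hosts edit idempotent: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, and append otherwise. The same logic as a Go sketch; the regex shapes mirror the grep/sed patterns and error handling is kept minimal.

	package main

	import (
		"os"
		"regexp"
		"strings"
	)

	// pinHostname reproduces the shell logic above: no-op if any line already
	// maps the host name, else rewrite an existing 127.0.1.1 line or append one.
	func pinHostname(hostsPath, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
			return nil // already present, like the grep -xq guard
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + name
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), entry)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
		}
		return os.WriteFile(hostsPath, []byte(out), 0o644)
	}

	func main() {
		if err := pinHostname("/etc/hosts", "newest-cni-348344"); err != nil {
			panic(err)
		}
	}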
	I1223 00:00:38.062284  687772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1223 00:00:38.062331  687772 ubuntu.go:190] setting up certificates
	I1223 00:00:38.062344  687772 provision.go:84] configureAuth start
	I1223 00:00:38.062400  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:38.081263  687772 provision.go:143] copyHostCerts
	I1223 00:00:38.081355  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1223 00:00:38.081386  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1223 00:00:38.081497  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1223 00:00:38.081760  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1223 00:00:38.081785  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1223 00:00:38.081851  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1223 00:00:38.082007  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1223 00:00:38.082027  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1223 00:00:38.082101  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1223 00:00:38.082238  687772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
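provision.go logs the full SAN set for the machine's server certificate: two IPs and three DNS names. Below is a self-contained sketch producing an equivalently shaped certificate with Go's crypto/x509; the CA is generated inline purely for the example (the run above reuses certs/ca.pem), and error returns are ignored for brevity.

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA for the example; errors elided throughout.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-348344"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the provision.go line above, split into the
			// two x509 SAN fields: IPs vs DNS names.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-348344"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}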
	I1223 00:00:38.170695  687772 provision.go:177] copyRemoteCerts
	I1223 00:00:38.170759  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1223 00:00:38.170815  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.189123  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.291920  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1223 00:00:38.309166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1223 00:00:38.326166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1223 00:00:38.344564  687772 provision.go:87] duration metric: took 282.199681ms to configureAuth
	I1223 00:00:38.344627  687772 ubuntu.go:206] setting minikube options for container-runtime
	I1223 00:00:38.344911  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:38.344995  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.366260  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.366529  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.366545  687772 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1223 00:00:38.510728  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1223 00:00:38.510754  687772 ubuntu.go:71] root file system type: overlay
	I1223 00:00:38.510908  687772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1223 00:00:38.510979  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.532018  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.532329  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.532458  687772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1223 00:00:38.686369  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1223 00:00:38.686443  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.704974  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.705187  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.705204  687772 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1223 00:00:38.853249  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.853285  687772 machine.go:97] duration metric: took 4.292539002s to provisionDockerMachine
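The diff-or-replace one-liner a few lines up is the idempotence guard for the unit file: only when docker.service.new differs from the installed docker.service does it move the file into place and daemon-reload/enable/restart. Equivalent logic as a Go sketch, shelling out to systemctl with the same flags:

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// installIfChanged swaps in the new unit and bounces docker only when the
	// contents actually differ -- the same effect as the
	// `diff -u ... || { mv ...; systemctl ...; }` command above.
	func installIfChanged(current, staged string) error {
		old, _ := os.ReadFile(current) // a missing file reads as empty -> "changed"
		next, err := os.ReadFile(staged)
		if err != nil {
			return err
		}
		if bytes.Equal(old, next) {
			return os.Remove(staged) // nothing to do
		}
		if err := os.Rename(staged, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"-f", "daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := installIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"); err != nil {
			panic(err)
		}
	}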
	I1223 00:00:38.853303  687772 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1223 00:00:38.853321  687772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1223 00:00:38.853419  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1223 00:00:38.853487  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.872421  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.979494  687772 ssh_runner.go:195] Run: cat /etc/os-release
	I1223 00:00:38.983309  687772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1223 00:00:38.983340  687772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1223 00:00:38.983352  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1223 00:00:38.983401  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1223 00:00:38.983478  687772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1223 00:00:38.983566  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1223 00:00:38.991328  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:39.009038  687772 start.go:296] duration metric: took 155.718049ms for postStartSetup
	I1223 00:00:39.009116  687772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1223 00:00:39.009156  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.028095  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	W1223 00:00:36.464751  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:38.465798  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:39.126878  687772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1223 00:00:39.131527  687772 fix.go:56] duration metric: took 4.874800463s for fixHost
	I1223 00:00:39.131555  687772 start.go:83] releasing machines lock for "newest-cni-348344", held for 4.874847678s
	I1223 00:00:39.131653  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:39.149572  687772 ssh_runner.go:195] Run: cat /version.json
	I1223 00:00:39.149644  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.149684  687772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1223 00:00:39.149791  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.169897  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.170262  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.329086  687772 ssh_runner.go:195] Run: systemctl --version
	I1223 00:00:39.336405  687772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1223 00:00:39.341291  687772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1223 00:00:39.341348  687772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1223 00:00:39.349279  687772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1223 00:00:39.349306  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.349351  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.349509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.363512  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1223 00:00:39.372815  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1223 00:00:39.381448  687772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.381508  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1223 00:00:39.390109  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.399162  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1223 00:00:39.408060  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.416584  687772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1223 00:00:39.424711  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1223 00:00:39.433432  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1223 00:00:39.442056  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1223 00:00:39.450905  687772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1223 00:00:39.458512  687772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1223 00:00:39.466991  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:39.550442  687772 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1223 00:00:39.632902  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.632954  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.633002  687772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1223 00:00:39.646974  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.659240  687772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1223 00:00:39.675979  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.688406  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1223 00:00:39.700912  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.715235  687772 ssh_runner.go:195] Run: which cri-dockerd
	I1223 00:00:39.718945  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1223 00:00:39.726972  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1223 00:00:39.739559  687772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1223 00:00:39.825824  687772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1223 00:00:39.909620  687772 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.909743  687772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1223 00:00:39.923453  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1223 00:00:39.935401  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:40.031388  687772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1223 00:00:40.779801  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1223 00:00:40.794700  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1223 00:00:40.807727  687772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1223 00:00:40.821730  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:40.835038  687772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1223 00:00:40.917554  687772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1223 00:00:41.006720  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.090943  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1223 00:00:41.119047  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1223 00:00:41.131277  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.215437  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1223 00:00:41.283622  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:41.298485  687772 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1223 00:00:41.298551  687772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1223 00:00:41.302554  687772 start.go:564] Will wait 60s for crictl version
	I1223 00:00:41.302645  687772 ssh_runner.go:195] Run: which crictl
	I1223 00:00:41.306307  687772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1223 00:00:41.332425  687772 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1223 00:00:41.332495  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.358777  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.385668  687772 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1223 00:00:41.385749  687772 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1223 00:00:41.403696  687772 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1223 00:00:41.407961  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.419714  687772 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1223 00:00:38.997749  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:40.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:41.420647  687772 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1223 00:00:41.420848  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:41.420937  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.444049  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.444075  687772 docker.go:624] Images already preloaded, skipping extraction
	I1223 00:00:41.444128  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.466181  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.466206  687772 cache_images.go:86] Images are preloaded, skipping loading
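The docker images listings above are compared against the image set expected for v1.35.0-rc.1; since every required tag is already present, extraction of the preload tarball is skipped. A sketch of that containment check, with the required list copied from the -- stdout -- block above (treating that block as the full required set is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		required := []string{ // tags taken from the -- stdout -- block above
			"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
			"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1",
			"registry.k8s.io/kube-scheduler:v1.35.0-rc.1",
			"registry.k8s.io/kube-proxy:v1.35.0-rc.1",
			"registry.k8s.io/etcd:3.6.6-0",
			"registry.k8s.io/coredns/coredns:v1.13.1",
			"registry.k8s.io/pause:3.10.1",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
				return
			}
		}
		fmt.Println("Images are preloaded, skipping loading")
	}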
	I1223 00:00:41.466215  687772 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1223 00:00:41.466312  687772 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1223 00:00:41.466372  687772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1223 00:00:41.520065  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:41.520097  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:41.520116  687772 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1223 00:00:41.520151  687772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1223 00:00:41.520280  687772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
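The kubeadm/kubelet/kube-proxy config above is rendered from the option struct logged at kubeadm.go:190, which is ordinary Go text/template substitution. A toy sketch rendering just the networking stanza that way; the template text and field names below are illustrative, not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	// opts carries only the fields this sketch substitutes; the real option
	// struct logged above is far larger.
	type opts struct {
		KubernetesVersion string
		PodSubnet         string
		ServiceCIDR       string
		DNSDomain         string
	}

	const stanza = `kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(stanza))
		// Values copied from the rendered config above.
		err := t.Execute(os.Stdout, opts{
			KubernetesVersion: "v1.35.0-rc.1",
			PodSubnet:         "10.42.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			DNSDomain:         "cluster.local",
		})
		if err != nil {
			panic(err)
		}
	}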
	
	I1223 00:00:41.520348  687772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1223 00:00:41.529909  687772 binaries.go:51] Found k8s binaries, skipping transfer
	I1223 00:00:41.529987  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1223 00:00:41.538670  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1223 00:00:41.552566  687772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1223 00:00:41.565766  687772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1223 00:00:41.578604  687772 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1223 00:00:41.582388  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.592239  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.689716  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:41.716265  687772 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1223 00:00:41.716293  687772 certs.go:195] generating shared ca certs ...
	I1223 00:00:41.716315  687772 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:41.716492  687772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1223 00:00:41.716548  687772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1223 00:00:41.716557  687772 certs.go:257] generating profile certs ...
	I1223 00:00:41.716731  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1223 00:00:41.716814  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1223 00:00:41.716864  687772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1223 00:00:41.716992  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1223 00:00:41.717032  687772 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1223 00:00:41.717041  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1223 00:00:41.717076  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1223 00:00:41.717110  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1223 00:00:41.717142  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1223 00:00:41.717206  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:41.718210  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1223 00:00:41.739858  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1223 00:00:41.759304  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1223 00:00:41.777433  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1223 00:00:41.794878  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1223 00:00:41.811695  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1223 00:00:41.829786  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1223 00:00:41.846666  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1223 00:00:41.863546  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1223 00:00:41.880762  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1223 00:00:41.898275  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1223 00:00:41.915259  687772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1223 00:00:41.927541  687772 ssh_runner.go:195] Run: openssl version
	I1223 00:00:41.933577  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.940758  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1223 00:00:41.948346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952096  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952140  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.987400  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1223 00:00:41.995158  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.002657  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1223 00:00:42.010346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014050  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014097  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.048067  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1223 00:00:42.055784  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.063393  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1223 00:00:42.071081  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075446  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075515  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.110438  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
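Each of the three CA bundles above is activated by symlinking <openssl-subject-hash>.0 in /etc/ssl/certs to the PEM file, which is how OpenSSL-based clients locate trust anchors; the hash comes straight from the openssl x509 -hash call. A sketch that reuses that same call rather than reimplementing the subject hash:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installTrustLink mirrors the ln -fs / openssl x509 -hash pair in the log:
	// the link name is the OpenSSL subject hash plus the ".0" suffix.
	func installTrustLink(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // -f semantics: replace any stale link
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := installTrustLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		fmt.Println("installed", link) // e.g. /etc/ssl/certs/b5213941.0 above
	}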
	I1223 00:00:42.118365  687772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1223 00:00:42.122225  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1223 00:00:42.157294  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1223 00:00:42.192810  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1223 00:00:42.226497  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1223 00:00:42.261017  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1223 00:00:42.299910  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
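
Each -checkend 86400 invocation asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will be, non-zero means it will have expired, so the run of successful checks above is what lets the existing control-plane certificates be reused. Standalone, the probe looks like:

    # Exit status signals validity: 0 = still valid in 24h, non-zero = expiring.
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h; would need regeneration"
    fi
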
	I1223 00:00:42.335335  687772 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:42.335492  687772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1223 00:00:42.355576  687772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1223 00:00:42.364792  687772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1223 00:00:42.364813  687772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1223 00:00:42.364868  687772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1223 00:00:42.373141  687772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1223 00:00:42.374088  687772 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.374674  687772 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-348344" cluster setting kubeconfig missing "newest-cni-348344" context setting]
	I1223 00:00:42.375665  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.377667  687772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1223 00:00:42.385784  687772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1223 00:00:42.385817  687772 kubeadm.go:602] duration metric: took 20.99658ms to restartPrimaryControlPlane
	I1223 00:00:42.385829  687772 kubeadm.go:403] duration metric: took 50.508354ms to StartCluster
	I1223 00:00:42.385848  687772 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.385918  687772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.387577  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.387909  687772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:42.388038  687772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:42.388131  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:42.388136  687772 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-348344"
	I1223 00:00:42.388154  687772 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-348344"
	I1223 00:00:42.388170  687772 addons.go:70] Setting dashboard=true in profile "newest-cni-348344"
	I1223 00:00:42.388189  687772 addons.go:70] Setting default-storageclass=true in profile "newest-cni-348344"
	I1223 00:00:42.388208  687772 addons.go:239] Setting addon dashboard=true in "newest-cni-348344"
	W1223 00:00:42.388222  687772 addons.go:248] addon dashboard should already be in state true
	I1223 00:00:42.388226  687772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-348344"
	I1223 00:00:42.388262  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388184  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388611  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388764  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388771  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.390679  687772 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:42.391709  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:42.411960  687772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1223 00:00:42.411964  687772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:42.412088  687772 addons.go:239] Setting addon default-storageclass=true in "newest-cni-348344"
	I1223 00:00:42.412122  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.412456  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.413023  687772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.413043  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:42.413094  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.416720  687772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1223 00:00:42.418014  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1223 00:00:42.418038  687772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1223 00:00:42.418101  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.435878  687772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.435898  687772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:42.435951  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.437317  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.440138  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.468807  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
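
The docker container inspect -f calls above use a Go template to extract the host port Docker mapped to the container's SSH port, and the three sshutil lines then dial 127.0.0.1 on that port (33168 here). The same template, run standalone against the container named in the log:

    # Index the port map at "22/tcp", take the first binding, print its HostPort.
    docker container inspect \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        newest-cni-348344
    # -> 33168
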
	I1223 00:00:42.547260  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:42.600477  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1223 00:00:42.600501  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1223 00:00:42.601363  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.607651  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.614184  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1223 00:00:42.614204  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1223 00:00:42.628125  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1223 00:00:42.628154  687772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1223 00:00:42.643093  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1223 00:00:42.643121  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1223 00:00:42.702024  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1223 00:00:42.702052  687772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1223 00:00:42.716211  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1223 00:00:42.716238  687772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1223 00:00:42.729547  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1223 00:00:42.729569  687772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1223 00:00:42.742218  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1223 00:00:42.742246  687772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1223 00:00:42.754716  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:42.754741  687772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1223 00:00:42.767403  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.251127  687772 api_server.go:52] waiting for apiserver process to appear ...
	W1223 00:00:43.251190  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.251231  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251243  687772 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:43.251536  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
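
Every apply in this stretch fails for the same reason: the restarted apiserver is not yet listening on localhost:8443, so kubectl cannot download the OpenAPI schema it needs for client-side validation and exits 1 before anything reaches the cluster. The retry.go:84 line above shows minikube's response: back off briefly and reapply with validation still on, rather than taking the --validate=false escape hatch the error text suggests. A hedged sketch of that retry shape (delays and the single file are illustrative, not minikube's actual code):

    # Illustrative retry loop, not minikube's implementation.
    for delay in 0.1 0.2 0.4 0.8 1.6; do
        kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml && break
        sleep "$delay"    # apiserver may still be coming up; try again shortly
    done
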
	I1223 00:00:43.392258  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.445582  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.478824  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.509277  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:43.537617  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.565023  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.661224  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.715310  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.751478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.004029  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.062136  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:40.965708  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.465160  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.498161  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:45.997960  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:44.088452  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.143262  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.251335  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.383985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:44.396693  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.439768  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:44.455552  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.902917  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.962468  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.251805  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.326701  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:45.400433  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.424647  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:45.480956  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.751310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.799387  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:45.854148  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.251658  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
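
The half-second cadence of pgrep lines here is minikube polling for the apiserver process to appear: -f matches the pattern against the full command line, -x requires the whole line to match, and -n picks the newest matching process. A standalone wait loop built on the same probe (the loop itself is illustrative):

    # Poll until a process whose full command line matches the pattern exists.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done
    echo "kube-apiserver process is up"
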
	I1223 00:00:46.752208  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.836006  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:46.890186  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.995326  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:47.057423  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.251702  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:47.426474  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:47.485952  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.752131  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:48.207567  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:48.251315  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:48.262862  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.519836  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:48.578806  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.752112  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:45.465342  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.465739  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:50.498122  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:49.251876  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.751844  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.890160  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:49.947020  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.066047  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:50.120129  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.251336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:50.558241  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:50.615271  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.751572  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.252351  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.751913  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.251786  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.752327  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
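
The half-second cadence above is minikube probing whether a kube-apiserver process for this profile exists at all: with pgrep, -f matches the pattern against the full command line, -x requires the whole line to match, and -n selects the newest match. A thin wrapper over the same check, as a sketch:

    package health

    import (
    	"os/exec"
    	"strings"
    )

    // apiserverPID mirrors the poll above: it returns the PID of the newest
    // kube-apiserver process whose full command line mentions "minikube".
    // pgrep exits non-zero when nothing matches, so ok is false then.
    func apiserverPID() (pid string, ok bool) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", false
    	}
    	return strings.TrimSpace(string(out)), true
    }
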
	I1223 00:00:52.791985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:52.850108  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:53.251446  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:53.751646  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:49.964544  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:51.964692  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:53.964842  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:52.997659  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:54.998340  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
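
The W...622784 lines are interleaved from a parallel test; the report merges several per-process streams, which is why timestamps jump around. That process drives the no-preload-063943 cluster, whose apiserver at 192.168.103.2:8443 is refusing connections just like the local one above, so its poll of the node's "Ready" condition fails and retries every couple of seconds. The check being retried amounts to reading one condition off the Node object; a client-go sketch, assuming a configured clientset:

    package health

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady fetches a node and reports its "Ready" condition, the check
    // behind the node_ready.go lines above. Errors such as the connection
    // refused here are returned so the caller can log them and retry.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
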
	I1223 00:00:54.252244  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.751444  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.347206  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:55.401615  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:55.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.936027  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:55.995088  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:56.251432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:56.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.111686  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:57.169648  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:57.251735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.751676  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.251279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.751377  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:55.965091  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:58.465307  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:59.465067  679852 pod_ready.go:94] pod "coredns-66bc5c9577-v4sr7" is "Ready"
	I1223 00:00:59.465093  679852 pod_ready.go:86] duration metric: took 31.505726579s for pod "coredns-66bc5c9577-v4sr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.467499  679852 pod_ready.go:83] waiting for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.471040  679852 pod_ready.go:94] pod "etcd-kubenet-003676" is "Ready"
	I1223 00:00:59.471063  679852 pod_ready.go:86] duration metric: took 3.544638ms for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.472907  679852 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.476385  679852 pod_ready.go:94] pod "kube-apiserver-kubenet-003676" is "Ready"
	I1223 00:00:59.476406  679852 pod_ready.go:86] duration metric: took 3.481083ms for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.478385  679852 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.663149  679852 pod_ready.go:94] pod "kube-controller-manager-kubenet-003676" is "Ready"
	I1223 00:00:59.663178  679852 pod_ready.go:86] duration metric: took 184.769862ms for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.863586  679852 pod_ready.go:83] waiting for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.263634  679852 pod_ready.go:94] pod "kube-proxy-4ftjm" is "Ready"
	I1223 00:01:00.263661  679852 pod_ready.go:86] duration metric: took 400.030267ms for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.464316  679852 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863672  679852 pod_ready.go:94] pod "kube-scheduler-kubenet-003676" is "Ready"
	I1223 00:01:00.863704  679852 pod_ready.go:86] duration metric: took 399.359894ms for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863716  679852 pod_ready.go:40] duration metric: took 32.907880274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1223 00:01:00.909769  679852 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1223 00:01:00.911549  679852 out.go:179] * Done! kubectl is now configured to use "kubenet-003676" cluster and "default" namespace by default
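
The 679852 stream ends well: each control-plane pod is polled until its Ready condition turns true (or the pod is gone), per-pod durations are recorded, and kubenet-003676 is declared started with a one-minor-version client/server skew, which kubectl tolerates. The wait is the same condition scan as for nodes, wrapped in a poll; a sketch using client-go's wait helper, with illustrative interval and timeout:

    package health

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports Ready, the shape of the
    // pod_ready.go waits above. The real helper also treats a deleted pod as
    // done; transient get errors just keep the poll going.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
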
	W1223 00:00:57.497653  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:59.497943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:59.251660  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.751533  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.252079  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.347519  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:00.405959  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.406017  687772 retry.go:84] will retry after 5.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.564176  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:00.620480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.751684  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.251318  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.252159  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.751813  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.251679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.752247  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:01.997413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:03.998103  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:06.498109  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:04.252308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.751451  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.252117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.751808  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.759328  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:05.824086  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:05.824133  687772 retry.go:84] will retry after 18.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.105622  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:06.162303  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.251478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:06.751336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.751827  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.251532  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.751779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:08.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:11.498214  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:09.251540  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.503324  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:09.562697  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.562742  687772 retry.go:84] will retry after 15.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.751986  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.251441  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.751356  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.251389  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.752279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.251802  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.751324  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.251818  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.751866  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:13.998099  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:16.497424  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:14.252139  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.751583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.252310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.751625  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.251842  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.252084  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.751783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.251376  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.751964  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:18.498157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:20.998023  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:19.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.751822  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.251704  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.568697  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:20.639449  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.639500  687772 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.751734  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.252249  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.751801  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.251412  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.751366  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.252197  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.751273  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:23.498226  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:25.998233  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:24.252133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.590346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:24.662633  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.662674  687772 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.751773  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:25.211650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:01:25.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:25.284581  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:25.752154  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.251728  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.751953  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.251838  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.751740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.251778  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.752182  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:28.498149  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:30.998068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:29.252321  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.752290  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.251523  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.251957  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.751675  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.251501  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.610650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:32.668272  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.668321  687772 retry.go:84] will retry after 25.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.751294  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.251566  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.751514  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:33.497906  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:35.997708  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
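The lines tagged 622784 belong to a second test process (the no-preload TestStartStop group) running in parallel with 687772, which is why timestamps appear to jump backwards; it is polling the node's Ready condition against 192.168.103.2:8443 and hitting the same class of connection refusal. A client-go sketch of that readiness check, assuming a clientset already built from the profile's kubeconfig (hypothetical wiring, not minikube's node_ready.go itself):

    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady fetches one node and inspects its Ready condition. While the
    // apiserver is down, the Get itself fails with the "connection refused"
    // errors seen above, and the caller retries.
    func nodeReady(ctx context.Context, c *kubernetes.Clientset, name string) (bool, error) {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }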
	I1223 00:01:34.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.751347  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.251743  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.751789  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.251397  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.751367  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.251392  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.751873  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.751798  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:38.498282  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:40.998196  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:39.251316  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.752288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.251339  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.751372  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.752308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
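Between log dumps, process 687772 polls for an apiserver process on a fixed half-second cadence: pgrep -xnf exits zero only when some process's full command line matches kube-apiserver.*minikube.*. A minimal sketch of that wait loop; the two-minute deadline is an illustrative assumption, not minikube's actual timeout:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Wait for the apiserver process the way the 500ms cadence above
    // suggests: pgrep exits 0 once a matching process exists.
    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }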
	I1223 00:01:42.752021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:42.776761  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.776791  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:42.776839  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:42.797079  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.797110  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:42.797168  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:42.817812  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.817839  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:42.817907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:42.837835  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.837868  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:42.837924  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:42.858165  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.858188  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:42.858231  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:42.878211  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.878236  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:42.878281  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:42.897739  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.897762  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:42.897817  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:42.918314  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.918340  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
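Each of these scans walks the control-plane components by container name: with cri-dockerd, kubelet-created containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name=k8s_kube-apiserver filter finds the apiserver container without knowing the pod hash. Zero matches across every component means the containers were never (re)created after the restart, not that they are crash-looping. The same scan driven from Go through the docker CLI, as the log does over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Re-run the scan from the log: one name filter per component is
    // enough because cri-dockerd prefixes kubelet-managed containers
    // with "k8s_<container>_".
    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "scan failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }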
	I1223 00:01:42.918352  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:42.918362  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:42.965734  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:42.965766  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:42.986555  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:42.986585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:43.052069  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
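The five memcache.go:265 lines per kubectl invocation are the discovery client retrying the /api group list before it gives up with the final "connection to the server ... was refused". Refused (an immediate TCP reset) means no process is bound to the port at all, consistent with the empty container scans; a hung apiserver would surface as a timeout instead. A small probe that tells the two apart (the errno check is Unix-specific):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    // Distinguish "nothing listening" (ECONNREFUSED, as in the log) from a
    // hung or filtered port (timeout).
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("port open: something is listening on 8443")
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Println("connection refused: no process bound to 8443")
        default:
            fmt.Println("other failure (timeout/filtered?):", err)
        }
    }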
	I1223 00:01:43.052092  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:43.052108  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:43.072511  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:43.072542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
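The container-status command is a shell fallback chain: the backquotes substitute crictl's full path when it is installed (or the bare word crictl, so the line still parses), and the outer || drops through to plain docker ps -a when crictl is missing or fails. The same preference order expressed in Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Prefer crictl for container status; fall back to docker when
    // crictl is absent or errors out, mirroring the shell one-liner.
    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no container runtime status available:", err)
            return
        }
        fmt.Print(string(out))
    }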
	W1223 00:01:43.497482  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:45.497538  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:45.620354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:45.631856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:45.651448  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.651473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:45.651528  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:45.671204  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.671229  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:45.671279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:45.690397  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.690418  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:45.690470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:45.711316  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.711342  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:45.711400  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:45.731731  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.731755  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:45.731812  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:45.751415  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.751442  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:45.751500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:45.770135  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.770157  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:45.770202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:45.789088  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.789114  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:45.789127  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:45.789139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:45.819714  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:45.819741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:45.867983  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:45.868013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:45.888018  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:45.888051  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:45.945247  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:45.945270  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:45.945286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:47.390289  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:47.445750  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:47.445788  687772 retry.go:84] will retry after 18.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:48.464795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:48.476515  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:48.499002  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.499028  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:48.499082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:48.523130  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.523165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:48.523225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:48.547882  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.547904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:48.547950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:48.567534  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.567560  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:48.567627  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:48.590197  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.590222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:48.590280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:48.609279  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.609306  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:48.609357  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:48.628700  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.628731  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:48.628795  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:48.648378  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.648402  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:48.648416  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:48.648433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:48.672700  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:48.672741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:48.743864  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:48.743885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:48.743897  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:48.762911  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:48.762941  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:48.793205  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:48.793235  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1223 00:01:47.498181  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:49.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:51.128346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:51.182480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.182522  687772 retry.go:84] will retry after 29.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.341781  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:51.353222  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:51.373421  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.373447  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:51.373497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:51.392557  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.392613  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:51.392675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:51.411939  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.411964  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:51.412018  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:51.430968  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.430998  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:51.431054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:51.451234  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.451266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:51.451329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:51.470904  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.470947  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:51.471017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:51.490121  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.490146  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:51.490201  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:51.511843  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.511875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:51.511891  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:51.511906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:51.545106  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:51.545138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:51.594541  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:51.594576  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:51.615455  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:51.615485  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:51.680061  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:51.680080  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:51.680091  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1223 00:01:52.497841  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:54.498062  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:56.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:54.199432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:54.211004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:54.230469  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.230498  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:54.230548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:54.251212  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.251245  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:54.251300  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:54.274147  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.274177  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:54.274238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:54.297381  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.297413  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:54.297471  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:54.316290  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.316315  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:54.316362  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:54.335315  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.335339  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:54.335393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:54.354058  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.354089  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:54.354144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:54.372661  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.372686  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:54.372700  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:54.372715  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:54.417565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:54.417601  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:54.438013  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:54.438041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:54.497552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:54.497575  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:54.497589  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.517495  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:54.517523  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:57.056037  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:57.067478  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:57.087084  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.087114  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:57.087183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:57.105193  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.105218  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:57.105270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:57.122859  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.122885  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:57.122931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:57.141996  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.142021  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:57.142074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:57.160005  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.160032  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:57.160083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:57.178889  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.178915  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:57.178989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:57.196419  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.196446  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:57.196498  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:57.214764  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.214790  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:57.214804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:57.214817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:57.266333  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:57.266370  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:57.289301  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:57.289330  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:57.347060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:57.347099  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:57.347117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:57.370222  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:57.370257  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:58.466074  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:58.519779  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:58.519828  687772 retry.go:84] will retry after 42.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:01:58.498177  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:00.998033  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:59.898063  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:59.909410  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:59.927950  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.927974  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:59.928017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:59.946773  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.946800  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:59.946861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:59.964419  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.964443  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:59.964500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:59.982454  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.982478  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:59.982537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:00.000838  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.000860  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:00.000929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:00.018673  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.018696  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:00.018747  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:00.035944  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.035973  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:00.036027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:00.053640  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.053666  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:00.053679  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:00.053700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:00.098844  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:00.098870  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:00.120198  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:00.120224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:00.175459  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
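Every "describe nodes" gather fails the same way: nothing answers on localhost:8443 inside the node, so kubectl cannot even fetch the API group list. A minimal diagnostic sketch, assuming a shell on the node (e.g. via minikube ssh -p <profile>) with ss and curl available in the node image:

	# confirm nothing is bound on 8443, that the apiserver static-pod manifest
	# exists for the kubelet to start, and what the endpoint itself returns
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	ls -l /etc/kubernetes/manifests/kube-apiserver.yaml
	curl -ksS https://localhost:8443/healthz; echo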
	I1223 00:02:00.175477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:00.175490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:00.194106  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:00.194146  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
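The cycle above is minikube's apiserver health loop: a pgrep for a live kube-apiserver process, then a docker name-filter query for each expected control-plane container, none of which exist yet. A hedged standalone reproduction of that detection step (the k8s_ prefix is the cri-dockerd container-naming scheme):

	# probe for each expected control-plane container by its k8s_<name> prefix;
	# an empty result corresponds to the "0 containers" lines in the log above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  if [ -n "$ids" ]; then echo "$c: $ids"; else echo "no container matching $c"; fi
	done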
	I1223 00:02:02.722361  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:02.733721  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:02.753939  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.753963  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:02.754025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:02.773570  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.773610  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:02.773665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:02.793427  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.793451  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:02.793514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:02.812154  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.812183  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:02.812241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:02.830757  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.830777  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:02.830819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:02.849140  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.849163  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:02.849206  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:02.867505  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.867529  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:02.867584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:02.885892  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.885920  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:02.885935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:02.885950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:02.933880  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:02.933906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:02.955273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:02.955304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:03.009924  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:03.009951  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:03.010012  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:03.028798  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:03.028821  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
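The container-status gather is a shell fallback chain: prefer crictl when it is installed, otherwise fall back to plain docker. Annotated copy of the same one-liner:

	# `which crictl || echo crictl` expands to crictl's path when installed,
	# or to the bare word "crictl" otherwise; if that command then fails
	# (crictl absent), the "|| sudo docker ps -a" branch runs instead
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a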
	W1223 00:02:03.497953  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:05.997506  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
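Interleaved here, process 622784 (the no-preload-063943 StartStop run) is polling that node's Ready condition against 192.168.103.2:8443 and hitting the same connection-refused failure. A hedged equivalent of its poll, assuming the kubeconfig context carries the profile name as minikube normally sets it:

	# print the Ready condition status ("True"/"False"/"Unknown") for the node;
	# while the apiserver is down this fails exactly as the warnings above do
	kubectl --context no-preload-063943 get node no-preload-063943 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'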
	I1223 00:02:05.557718  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:05.569769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:05.588873  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.588899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:05.588946  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:05.607265  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.607289  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:05.607342  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:05.625761  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.625790  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:05.625860  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:05.643443  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.643464  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:05.643513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:05.661241  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.661266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:05.661314  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:05.679744  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.679764  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:05.679805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:05.697808  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.697831  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:05.697878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:05.716222  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.716245  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:05.716255  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:05.716269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:05.772404  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:05.772437  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:05.793610  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:05.793643  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:05.849453  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:05.849479  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:05.849493  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:05.868250  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:05.868273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
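Throughout these cycles the dmesg gather filters the kernel ring buffer down to actionable entries. The flag meanings below are from util-linux dmesg, stated from memory, so treat them as an assumption:

	# -P (--nopager) writes straight to stdout, -H (--human) keeps readable
	# timestamps, -L=never disables color codes in the captured output, and
	# --level drops everything below warn; tail bounds the capture at 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400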
	I1223 00:02:05.997019  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:02:06.048845  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:06.048961  687772 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
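The storageclass addon apply fails before it ever reaches the server: kubectl's client-side validation tries to download the OpenAPI schema from the unreachable apiserver. The --validate=false hint in the error would not rescue this case, since the apply itself still needs a reachable server; the same command simply succeeds once 8443 answers. A sketch of the retry minikube will attempt (identical to the failed command above):

	# re-run of the addon apply; succeeds only once localhost:8443 is serving
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	  -f /etc/kubernetes/addons/storageclass.yaml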
	I1223 00:02:08.398283  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:08.409794  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:08.428854  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.428878  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:08.428927  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:08.447223  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.447248  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:08.447292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:08.464797  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.464816  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:08.464857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:08.483364  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.483386  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:08.483449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:08.501985  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.502014  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:08.502064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:08.520995  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.521020  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:08.521071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:08.542089  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.542109  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:08.542149  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:08.560448  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.560468  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:08.560478  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:08.560489  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:08.606044  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:08.606073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:08.626314  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:08.626339  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:08.681455  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:08.681477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:08.681490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:08.699804  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:08.699830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:07.998312  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:10.498076  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:11.230050  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:11.241320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:11.260234  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.260256  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:11.260301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:11.279535  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.279558  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:11.279635  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:11.297775  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.297799  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:11.297843  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:11.315756  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.315780  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:11.315823  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:11.333828  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.333850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:11.333894  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:11.351608  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.351631  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:11.351673  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:11.371003  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.371024  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:11.371065  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:11.388642  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.388662  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:11.388671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:11.388681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:11.406664  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:11.406687  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.434828  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:11.434851  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:11.481940  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:11.481966  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:11.503051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:11.503077  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:11.563912  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:14.064094  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:02:12.997638  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:15.497547  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:15.997757  622784 node_ready.go:38] duration metric: took 6m0.000870759s for node "no-preload-063943" to be "Ready" ...
	I1223 00:02:15.999489  622784 out.go:203] 
	W1223 00:02:16.002745  622784 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1223 00:02:16.002767  622784 out.go:285] * 
	W1223 00:02:16.002971  622784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1223 00:02:16.004060  622784 out.go:203] 
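This is where the no-preload-063943 run gives up: the 6m0s WaitNodeCondition deadline expires with the node never reaching Ready, and the start exits with GUEST_START. The box above already names the triage path; concretely, for this profile:

	# capture the full log bundle the issue template asks for, then retry
	# from a clean profile (re-add the run's original start flags as needed)
	minikube logs -p no-preload-063943 --file=logs.txt
	minikube delete -p no-preload-063943
	minikube start -p no-preload-063943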
	I1223 00:02:14.075388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:14.094051  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.094075  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:14.094123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:14.112428  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.112454  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:14.112511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:14.130910  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.130935  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:14.130991  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:14.149172  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.149194  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:14.149247  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:14.167387  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.167414  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:14.167470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:14.187009  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.187034  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:14.187080  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:14.205514  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.205537  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:14.205604  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:14.223867  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.223893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:14.223906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:14.223919  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:14.278850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:14.278877  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:14.278904  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:14.297791  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:14.297817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:14.329010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:14.329035  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:14.375196  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:14.375228  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:16.895760  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:16.908501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:16.928330  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.928357  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:16.928403  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:16.947248  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.947272  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:16.947319  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:16.967240  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.967266  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:16.967318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:16.986942  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.986966  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:16.987025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:17.008674  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.008702  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:17.008760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:17.030466  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.030492  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:17.030548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:17.051687  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.051719  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:17.051773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:17.073457  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.073486  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:17.073502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:17.073521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:17.131973  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:17.132010  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:17.157397  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:17.157433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:17.217639  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:17.217669  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:17.217683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:17.239498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:17.239530  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:19.769550  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:19.782360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:19.802423  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.802446  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:19.802497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:19.821183  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.821214  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:19.821269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:19.840343  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.840369  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:19.840426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:19.857810  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.857835  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:19.857878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:19.875458  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.875481  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:19.875523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:19.893840  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.893864  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:19.893916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:19.912030  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.912053  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:19.912094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:19.930049  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.930066  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:19.930077  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:19.930088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:19.976279  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:19.976304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:19.995814  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:19.995837  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:20.054797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:20.054819  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:20.054833  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:20.074562  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:20.074588  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:20.651032  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:02:20.702678  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:20.702795  687772 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
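Same failure mode as the default-storageclass addon earlier: validation cannot download the OpenAPI schema, addons.go logs "will retry", and the callback is retried. If the retries are exhausted, the addon can also be toggled by hand once the control plane is up (profile name is a placeholder here):

	# manual re-enable after the apiserver recovers
	minikube -p <profile> addons enable storage-provisioner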
	I1223 00:02:22.602868  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:22.614420  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:22.633871  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.633892  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:22.633942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:22.652376  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.652403  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:22.652454  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:22.670318  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.670340  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:22.670384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:22.688893  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.688913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:22.688966  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:22.707579  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.707614  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:22.707667  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:22.726147  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.726174  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:22.726230  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:22.744895  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.744919  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:22.744975  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:22.765807  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.765834  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:22.765848  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:22.765858  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:22.786075  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:22.786111  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:22.814010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:22.814034  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:22.859717  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:22.859741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:22.878865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:22.878889  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:22.933790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:25.434500  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:25.446396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:25.466157  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.466184  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:25.466237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:25.484799  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.484827  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:25.484899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:25.503442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.503470  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:25.503516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:25.522088  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.522114  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:25.522174  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:25.540899  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.540924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:25.540979  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:25.559853  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.559877  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:25.559929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:25.578537  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.578560  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:25.578619  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:25.597442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.597465  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:25.597476  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:25.597491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:25.617688  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:25.617718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:25.672737  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:25.665784    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.666239    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.667786    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.668269    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.669751    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:25.672761  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:25.672777  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:25.691559  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:25.691585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:25.719893  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:25.719918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
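The block above is one iteration of minikube's control-plane probe: pgrep for a kube-apiserver process, then docker ps with a name filter per component, all returning empty. The same sweep can be reproduced by hand inside the node; a sketch using the exact filter pattern from the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== k8s_${c} =="
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Status}}'
    done

An empty listing under a header corresponds to the "0 containers" entries logged above.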
	I1223 00:02:28.271777  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:28.284248  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:28.304042  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.304069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:28.304126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:28.322682  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.322711  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:28.322769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:28.340899  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.340925  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:28.340974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:28.359896  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.359922  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:28.359976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:28.378627  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.378650  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:28.378700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:28.396793  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.396821  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:28.396870  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:28.415408  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.415434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:28.415480  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:28.434108  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.434131  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:28.434142  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:28.434153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:28.462377  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:28.462405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.509046  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:28.509080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:28.531034  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:28.531065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:28.587866  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:28.580703    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.581207    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.582777    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.583174    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.584683    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:28.587904  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:28.587920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
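The timestamps show the whole gather-and-probe cycle repeating roughly every three seconds. A hypothetical wait loop with the same cadence (endpoint taken from the errors above; curl assumed available):

    until curl -sk --max-time 2 https://localhost:8443/healthz >/dev/null 2>&1; do
      echo "apiserver not ready on :8443; retrying..."
      sleep 3
    done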
	I1223 00:02:31.109730  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:31.121215  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:31.140775  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.140799  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:31.140853  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:31.160694  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.160719  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:31.160766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:31.180064  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.180087  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:31.180133  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:31.198777  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.198802  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:31.198856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:31.217848  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.217875  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:31.217923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:31.237167  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.237196  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:31.237251  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:31.257964  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.257995  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:31.258056  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:31.279556  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.279581  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:31.279607  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:31.279624  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:31.336644  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:31.336664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:31.336675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.355102  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:31.355129  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:31.384063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:31.384096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:31.429299  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:31.429337  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
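The container-status gather uses sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The inner `which crictl || echo crictl` guarantees sudo always receives a command word; when crictl is absent, the literal "crictl" invocation fails and the trailing || branch runs docker instead. Expanded for readability, a sketch of the same fallback:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a        # CRI view of all containers
    else
      sudo docker ps -a        # fall back to the Docker CLI
    fi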
	I1223 00:02:33.951226  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:33.962558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:33.981280  687772 logs.go:282] 0 containers: []
	W1223 00:02:33.981301  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:33.981353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:34.000326  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.000351  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:34.000417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:34.020043  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.020069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:34.020114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:34.042279  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.042304  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:34.042363  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:34.060550  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.060571  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:34.060631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:34.078917  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.078939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:34.078986  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:34.098151  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.098177  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:34.098224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:34.117100  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.117124  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:34.117137  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:34.117153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:34.138330  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:34.138358  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:34.193562  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:34.193588  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:34.193615  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:34.212264  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:34.212288  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:34.240368  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:34.240399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
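Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires an exact match of that line, and -n keeps only the newest matching process. Run manually, a sketch:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" \
      || echo "no apiserver process"

The immediate jump to the docker probes above means this pgrep is exiting non-zero every time.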
	I1223 00:02:36.793206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:36.804783  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:36.823535  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.823556  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:36.823618  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:36.841856  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.841879  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:36.841933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:36.860292  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.860319  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:36.860360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:36.878691  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.878719  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:36.878773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:36.897448  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.897472  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:36.897519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:36.916562  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.916585  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:36.916654  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:36.934784  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.934807  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:36.934865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:36.953285  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.953305  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:36.953317  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:36.953328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:37.000978  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:37.001008  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:37.021185  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:37.021217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:37.081314  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:37.074072    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.074651    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076251    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076693    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.078295    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:37.081345  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:37.081366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:37.100453  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:37.100480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
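The Docker and kubelet gathers are plain journalctl pulls of the last 400 lines per unit. Equivalent manual commands (flags copied from the runs above; --no-pager added here for non-interactive use):

    sudo journalctl -u docker -u cri-docker -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager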
	I1223 00:02:39.629693  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:39.641060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:39.660163  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.660187  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:39.660232  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:39.680357  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.680379  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:39.680422  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:39.699821  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.699853  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:39.699916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:39.719383  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.719407  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:39.719460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:39.739699  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.739726  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:39.739800  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:39.758766  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.758791  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:39.758849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:39.777656  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.777690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:39.777752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:39.796962  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.796984  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:39.796995  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:39.797006  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:39.842320  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:39.842347  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:39.862054  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:39.862080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:39.916930  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:39.910012    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.910586    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912148    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912544    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.913971    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:39.916953  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:39.916970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:39.935277  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:39.935306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
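The dmesg gather filters the kernel ring buffer down to warnings and worse before taking the last 400 lines. The same command with long option names, a sketch assuming the util-linux dmesg used on the node:

    sudo dmesg --human --nopager --color=never --level=warn,err,crit,alert,emerg | tail -n 400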
	I1223 00:02:40.946301  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:02:41.000005  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:41.000109  687772 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1223 00:02:41.001884  687772 out.go:179] * Enabled addons: 
	I1223 00:02:41.002846  687772 addons.go:530] duration metric: took 1m58.614813363s for enable addons: enabled=[]
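The stderr above suggests --validate=false, but that would not rescue this apply: validation fails only because kubectl cannot download the OpenAPI schema from the dead apiserver, and skipping it merely moves the same connection failure to the apply request itself. For illustration, the suggested retry for one manifest (command and paths copied from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml
    # with nothing listening on :8443 this still fails, now with "connection refused" on the apply itself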
	I1223 00:02:42.463498  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:42.474861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:42.493733  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.493756  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:42.493806  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:42.513344  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.513376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:42.513436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:42.537617  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.537647  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:42.537701  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:42.557673  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.557698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:42.557746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:42.576567  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.576604  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:42.576669  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:42.595813  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.595836  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:42.595890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:42.615074  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.615101  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:42.615154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:42.634655  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.634685  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:42.634702  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:42.634719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:42.654826  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:42.654852  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:42.710552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:42.703396    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.703921    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705447    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705896    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.707365    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:42.710573  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:42.710585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:42.729412  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:42.729439  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:42.758163  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:42.758187  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.306682  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:45.318226  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:45.337265  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.337287  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:45.337343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:45.355924  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.355945  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:45.355990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:45.374282  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.374303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:45.374348  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:45.394500  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.394533  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:45.394584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:45.412466  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.412489  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:45.412538  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:45.431148  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.431185  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:45.431234  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:45.450281  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.450303  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:45.450352  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:45.468758  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.468787  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:45.468804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:45.468818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.520708  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:45.520742  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:45.542983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:45.543013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:45.598778  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:45.591672    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.592201    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.593887    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.594424    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.595931    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:45.598798  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:45.598812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:45.617903  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:45.617931  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.156370  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:48.167842  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:48.187202  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.187224  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:48.187268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:48.206448  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.206471  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:48.206516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:48.225302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.225322  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:48.225373  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:48.244155  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.244185  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:48.244245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:48.264312  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.264350  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:48.264418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:48.284233  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.284260  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:48.284317  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:48.303899  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.303924  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:48.303973  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:48.324302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.324335  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:48.324350  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:48.324366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:48.345435  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:48.345463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:48.402949  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
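	(Every describe-nodes attempt in this run fails identically: kubectl inside the node dials https://localhost:8443 and gets connection refused, which matches the empty docker ps results above, since no kube-apiserver container ever came up. A quick way to confirm the endpoint is dead from inside the node (minikube ssh), using the kubeconfig and binary paths the log already shows; a sketch for manual debugging, not part of the test:

		# Does anything answer on the apiserver port at all?
		curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver not listening on 8443"
		# Same check via the bundled kubectl:
		sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
		  --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/healthz'

	Both should fail with connection refused for as long as the cycles below keep repeating.)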
	I1223 00:02:48.402972  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:48.402984  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:48.423927  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:48.423954  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.452771  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:48.452799  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.001239  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:51.013175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:51.032822  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.032846  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:51.032898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:51.051652  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.051682  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:51.051724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:51.070373  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.070395  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:51.070448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:51.088655  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.088676  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:51.088732  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:51.108004  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.108025  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:51.108078  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:51.126636  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.126662  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:51.126728  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:51.145355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.145385  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:51.145451  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:51.164355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.164384  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:51.164396  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:51.164409  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:51.191698  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:51.191724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.238383  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:51.238411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:51.260545  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:51.260580  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:51.318147  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:51.310651    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.311192    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.312772    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.313264    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.314877    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:51.310651    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.311192    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.312772    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.313264    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.314877    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:51.318168  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:51.318182  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:53.838848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:53.850007  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:53.868584  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.868622  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:53.868663  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:53.887617  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.887640  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:53.887687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:53.906384  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.906409  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:53.906453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:53.924912  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.924938  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:53.924988  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:53.943400  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.943425  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:53.943477  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:53.961941  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.961969  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:53.962024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:53.980915  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.980941  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:53.980987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:53.998798  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.998817  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:53.998827  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:53.998839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:54.017064  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:54.017089  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:54.045091  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:54.045114  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:54.090278  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:54.090307  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:54.111890  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:54.111920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:54.166797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:54.159777    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.160366    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.161942    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.162343    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.163852    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:54.159777    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.160366    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.161942    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.162343    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.163852    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
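	(The five memcache.go:265 lines per attempt are client-go's discovery cache retrying the /api group-list fetch before kubectl gives up; the final "connection to the server localhost:8443 was refused" line is the summarized error. Re-running the logged command with client-side verbosity would show each underlying round trip; a hypothetical debugging invocation, not from the test:

		sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
		  --kubeconfig=/var/lib/minikube/kubeconfig -v=6 describe nodes
	)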
	I1223 00:02:56.668571  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:56.680147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:56.699018  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.699042  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:56.699093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:56.716996  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.717019  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:56.717068  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:56.735529  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.735565  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:56.735644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:56.756677  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.756701  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:56.756757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:56.777819  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.777850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:56.777905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:56.799967  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.799997  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:56.800054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:56.818811  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.818836  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:56.818881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:56.837426  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.837461  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:56.837473  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:56.837487  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:56.893850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:56.893879  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:56.893894  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:56.912125  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:56.912151  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:56.939250  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:56.939279  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:56.986566  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:56.986599  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.506330  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:59.518294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:59.540502  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.540529  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:59.540586  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:59.559288  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.559322  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:59.559372  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:59.577919  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.577945  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:59.578002  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:59.596632  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.596655  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:59.596705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:59.614750  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.614775  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:59.614826  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:59.632989  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.633007  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:59.633057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:59.650953  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.650972  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:59.651020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:59.669171  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.669190  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:59.669202  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:59.669214  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:59.713997  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:59.714026  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.733682  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:59.733709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:59.801000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:59.801018  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:59.801029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:59.819988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:59.820018  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
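	(The gather order shuffles between cycles, Docker first in one pass, kubelet or dmesg first in another, but the same five sources are always collected: the kubelet and docker/cri-docker journald units, level-filtered dmesg, a describe-nodes attempt, and a container listing with a crictl-to-docker fallback. The logged commands can be replayed by hand inside the node to watch the failure live:

		sudo journalctl -u kubelet -n 400
		sudo journalctl -u docker -u cri-docker -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
		sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	)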
	I1223 00:03:02.350019  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:02.361484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:02.380765  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.380793  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:02.380841  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:02.398822  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.398847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:02.398892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:02.416468  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.416488  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:02.416530  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:02.435155  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.435182  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:02.435237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:02.453935  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.453961  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:02.454012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:02.472347  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.472376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:02.472445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:02.490480  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.490505  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:02.490562  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:02.510458  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.510485  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:02.510498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:02.510509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.541744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:02.541769  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:02.587578  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:02.587619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:02.607135  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:02.607161  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:02.663082  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:02.663104  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:02.663117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.182740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:05.194033  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:05.212783  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.212809  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:05.212868  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:05.230615  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.230643  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:05.230687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:05.249068  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.249091  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:05.249140  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:05.268884  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.268913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:05.268965  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:05.288077  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.288103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:05.288159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:05.306886  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.306916  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:05.306970  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:05.325552  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.325579  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:05.325644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:05.344222  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.344252  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:05.344264  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:05.344276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:05.389222  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:05.389252  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:05.409357  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:05.409384  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:05.466244  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:05.466269  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:05.466285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.484803  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:05.484830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.013719  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:08.026534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:08.046545  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.046567  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:08.046633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:08.065353  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.065375  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:08.065423  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:08.084081  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.084109  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:08.084156  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:08.102488  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.102514  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:08.102570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:08.121317  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.121347  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:08.121391  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:08.139209  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.139232  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:08.139282  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:08.157445  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.157465  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:08.157510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:08.177073  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.177101  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:08.177115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:08.177131  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:08.195188  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:08.195222  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.223256  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:08.223282  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:08.270668  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:08.270696  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:08.290331  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:08.290355  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:08.344801  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:10.846497  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:10.857798  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:10.876797  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.876818  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:10.876863  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:10.895838  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.895862  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:10.895907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:10.913971  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.913996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:10.914038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:10.932422  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.932449  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:10.932501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:10.951013  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.951034  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:10.951076  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:10.969170  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.969198  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:10.969242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:10.988274  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.988332  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:10.988382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:11.006849  687772 logs.go:282] 0 containers: []
	W1223 00:03:11.006875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:11.006889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:11.006906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:11.059569  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:11.059619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:11.079808  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:11.079835  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:11.134768  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:11.134794  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:11.134817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:11.153181  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:11.153207  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.681510  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:13.692957  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:13.711987  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.712017  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:13.712069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:13.730999  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.731026  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:13.731083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:13.753677  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.753709  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:13.753769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:13.779299  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.779328  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:13.779389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:13.800195  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.800223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:13.800269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:13.818836  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.818861  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:13.818905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:13.837265  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.837293  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:13.837349  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:13.855911  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.855934  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:13.855944  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:13.855963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:13.877413  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:13.877442  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:13.932902  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:13.932922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:13.932935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:13.951430  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:13.951455  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.979434  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:13.979463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
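
The stderr blocks above show the standard kubectl failure mode when nothing is listening on the apiserver port: every API group probe dials [::1]:8443 and is refused, so the describe-nodes command exits with status 1. A minimal Go sketch of the equivalent reachability check follows; the address matches the log, but the two-second timeout is an assumption, and minikube itself keys off pgrep and container listings rather than a raw dial.

// apiserverprobe.go: stand-in for the check the refused dials above imply.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port taken from the log above; the timeout is illustrative.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Mirrors: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}
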
	I1223 00:03:16.528395  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:16.539658  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:16.558721  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.558746  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:16.558802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:16.577097  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.577122  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:16.577169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:16.594944  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.594973  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:16.595021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:16.612956  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.612982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:16.613028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:16.631601  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.631626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:16.631689  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:16.650054  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.650077  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:16.650125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:16.668847  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.668868  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:16.668912  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:16.686862  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.686892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:16.686906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:16.686923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:16.743145  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:16.743166  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:16.743178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:16.762565  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:16.762607  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:16.794528  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:16.794556  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.840343  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:16.840372  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
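
Each retry cycle gathers the same log sources on the guest: the kubelet and docker/cri-docker journals, dmesg filtered to warnings and above, kubectl describe nodes, and container status. A rough local stand-in is sketched below; the scripts are copied verbatim from the cycle above, but running them through exec.Command is an assumption, since minikube drives them through its SSH runner.

package main

import (
	"fmt"
	"os/exec"
)

// runLog executes one gathering command copied verbatim from the cycle above.
func runLog(name, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
}

func main() {
	runLog("kubelet", "sudo journalctl -u kubelet -n 400")
	runLog("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	runLog("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
}
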
	I1223 00:03:19.362509  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:19.374211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:19.393192  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.393216  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:19.393268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:19.412437  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.412465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:19.412523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:19.432373  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.432401  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:19.432460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:19.452125  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.452159  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:19.452217  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:19.471301  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.471328  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:19.471374  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:19.490544  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.490571  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:19.490643  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:19.510487  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.510508  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:19.510559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:19.529060  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.529084  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:19.529097  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:19.529112  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:19.574443  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:19.574473  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.594488  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:19.594517  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:19.649890  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:19.649910  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:19.649923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:19.668626  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:19.668651  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
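
The repeated "0 containers: []" lines come from a per-component lookup: docker is asked for any container whose name carries the k8s_<component> prefix used for pod containers under the Docker runtime, and an empty answer means the control plane never came up. An illustrative re-creation of that lookup (not minikube's own logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids := containerIDs(c)
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
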
	I1223 00:03:22.198480  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:22.210881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:22.230438  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.230462  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:22.230522  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:22.248861  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.248882  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:22.248922  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:22.268466  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.268499  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:22.268557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:22.289199  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.289223  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:22.289268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:22.307380  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.307405  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:22.307470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:22.324678  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.324704  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:22.324763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:22.343704  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.343736  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:22.343791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:22.362087  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.362117  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:22.362137  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:22.362150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:22.409818  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:22.409877  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:22.430134  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:22.430165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:22.485643  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:22.485664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:22.485680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:22.504121  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:22.504150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:25.031881  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:25.043513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:25.063145  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.063167  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:25.063211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:25.082000  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.082025  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:25.082074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:25.099962  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.099984  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:25.100038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:25.118454  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.118479  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:25.118537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:25.136993  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.137020  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:25.137069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:25.155902  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.155925  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:25.155974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:25.175659  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.175683  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:25.175737  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:25.194139  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.194167  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:25.194180  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:25.194193  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:25.240226  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:25.240258  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:25.261339  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:25.261367  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:25.320736  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:25.320756  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:25.320768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:25.341035  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:25.341064  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
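
The container-status command relies on a shell fallback: `which crictl || echo crictl` substitutes the bare name when crictl is not installed, the resulting invocation then fails, and `|| sudo docker ps -a` takes over. The same try-then-fall-back shape in Go (dropping sudo for a local sketch is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl; on any failure (missing binary or non-zero exit),
	// fall back to docker, as the shell one-liner above does.
	if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
		fmt.Print(string(out))
		return
	}
	out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}
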
	I1223 00:03:27.870845  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:27.882071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:27.901298  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.901323  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:27.901382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:27.919859  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.919880  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:27.919930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:27.938496  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.938520  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:27.938563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:27.956888  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.956916  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:27.956972  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:27.975342  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.975362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:27.975412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:27.994015  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.994038  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:27.994082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:28.013037  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.013065  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:28.013125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:28.033210  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.033234  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:28.033247  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:28.033262  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:28.078861  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:28.078892  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:28.098865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:28.098890  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:28.154165  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:28.154185  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:28.154197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:28.172425  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:28.172454  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.702937  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:30.714537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:30.735323  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.735346  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:30.735411  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:30.754342  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.754364  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:30.754416  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:30.773486  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.773513  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:30.773570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:30.792473  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.792498  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:30.792554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:30.810955  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.810981  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:30.811028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:30.829795  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.829816  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:30.829864  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:30.848939  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.848959  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:30.849000  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:30.867397  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.867423  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:30.867435  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:30.867452  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:30.887088  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:30.887116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:30.942084  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:30.942116  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:30.942130  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:30.960703  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:30.960730  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.988334  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:30.988359  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
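
The pgrep lines recur roughly every 2.5 to 2.8 seconds: the runner keeps checking for a kube-apiserver process and re-gathers logs while none exists. A minimal sketch of that cadence; the pgrep pattern is copied from the log, while the five-minute deadline and sleep interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(2500 * time.Millisecond) // roughly the spacing of the cycles above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
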
	I1223 00:03:33.539710  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:33.551147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:33.569876  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.569899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:33.569943  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:33.588678  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.588710  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:33.588766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:33.607229  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.607251  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:33.607302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:33.625442  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.625466  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:33.625527  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:33.644308  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.644340  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:33.644396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:33.662684  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.662717  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:33.662786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:33.681135  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.681161  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:33.681209  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:33.700016  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.700042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:33.700057  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:33.700070  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:33.718957  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:33.718985  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:33.747390  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:33.747417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.793693  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:33.793722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:33.815051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:33.815076  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:33.869709  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:36.371365  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:36.383229  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:36.403744  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.403771  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:36.403818  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:36.422087  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.422109  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:36.422163  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:36.440967  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.440989  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:36.441046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:36.459110  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.459137  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:36.459184  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:36.477754  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.477781  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:36.477838  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:36.496775  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.496803  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:36.496857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:36.516542  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.516577  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:36.516652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:36.537692  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.537720  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:36.537731  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:36.537744  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:36.585346  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:36.585376  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:36.605519  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:36.605545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:36.660230  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:36.660253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:36.660269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:36.678368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:36.678395  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
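
One small detail in the stderr: kubectl dials [::1]:8443 even though the URL says localhost, because name resolution on the guest returns the IPv6 loopback first; the refusal would look the same over 127.0.0.1. A quick way to inspect the resolution order (the output shown in the comment is typical, not guaranteed):

package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("localhost")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs) // commonly [::1 127.0.0.1] on such guests
}
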
	I1223 00:03:39.206672  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:39.218123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:39.236299  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.236322  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:39.236384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:39.256168  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.256194  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:39.256256  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:39.278907  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.278934  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:39.278987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:39.299685  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.299712  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:39.299771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:39.319824  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.319847  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:39.319890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:39.339314  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.339340  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:39.339388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:39.357097  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.357122  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:39.357178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:39.375484  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.375506  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:39.375518  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:39.375528  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:39.422143  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:39.422171  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:39.442163  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:39.442190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:39.499251  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:39.499300  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:39.499313  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:39.520555  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:39.520585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:42.050334  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:42.062329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:42.081392  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.081414  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:42.081466  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:42.100032  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.100060  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:42.100108  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:42.118667  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.118701  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:42.118755  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:42.137260  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.137280  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:42.137324  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:42.156202  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.156223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:42.156268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:42.173781  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.173805  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:42.173849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:42.191802  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.191823  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:42.191865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:42.210403  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.210428  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:42.210439  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:42.210451  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:42.257288  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:42.257324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:42.279921  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:42.279950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:42.335965  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:42.335989  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:42.336007  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:42.354691  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:42.354717  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:44.883238  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:44.894443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:44.913117  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.913141  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:44.913198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:44.931401  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.931426  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:44.931481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:44.950195  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.950223  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:44.950276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:44.968485  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.968511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:44.968566  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:44.987148  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.987171  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:44.987233  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:45.005624  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.005646  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:45.005693  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:45.023699  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.023724  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:45.023791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:45.042874  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.042892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:45.042903  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:45.042913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:45.091063  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:45.091090  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:45.111078  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:45.111104  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:45.165637  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:45.165664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:45.165680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:45.183805  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:45.183831  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
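What the loop above is doing: minikube is waiting for the apiserver to come up, and on each pass it (1) looks for a kube-apiserver process, (2) probes for each expected k8s_* container by name, and (3) gathers kubelet, dmesg, describe-nodes, Docker, and container-status logs when everything comes back empty. A minimal sketch of re-running the same probes by hand, using the commands shown in the log (PROFILE is a placeholder for whichever minikube profile this test created, not a value taken from the log):

    PROFILE=<profile-under-test>    # placeholder, not from the log
    minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    minikube -p "$PROFILE" ssh -- "docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'"
    minikube -p "$PROFILE" ssh -- "sudo journalctl -u kubelet -n 400"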
	I1223 00:03:47.712691  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:47.724393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:47.743118  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.743145  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:47.743192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:47.764020  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.764047  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:47.764100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:47.784950  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.784979  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:47.785031  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:47.805130  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.805153  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:47.805202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:47.824818  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.824840  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:47.824881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:47.842122  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.842142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:47.842182  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:47.860107  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.860126  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:47.860169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:47.877957  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.877981  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:47.877991  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:47.878003  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.913554  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:47.913583  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:47.959272  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:47.959301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:47.979197  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:47.979224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:48.034846  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:48.034864  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:48.034876  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:50.554653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:50.565766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:50.584506  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.584527  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:50.584568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:50.603087  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.603112  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:50.603159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:50.621694  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.621718  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:50.621758  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:50.640855  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.640882  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:50.640950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:50.658573  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.658615  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:50.658659  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:50.676703  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.676725  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:50.676792  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:50.694997  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.695020  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:50.695084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:50.711361  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.711382  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:50.711393  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:50.711405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:50.739475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:50.739500  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:50.789788  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:50.789828  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:50.810067  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:50.810096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:50.864855  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:50.857771   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.858239   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.859923   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.860349   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.861907   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:50.857771   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.858239   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.859923   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.860349   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.861907   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:50.864881  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:50.864896  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.383457  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:53.394757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:53.414248  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.414277  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:53.414341  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:53.432950  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.432970  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:53.433020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:53.452058  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.452081  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:53.452143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:53.470670  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.470698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:53.470751  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:53.489416  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.489443  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:53.489486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:53.508963  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.508995  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:53.509057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:53.530683  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.530710  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:53.530770  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:53.551545  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.551577  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:53.551610  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:53.551627  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.570296  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:53.570324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:53.598123  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:53.598154  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:53.646248  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:53.646280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:53.666819  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:53.666844  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:53.722068  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:53.715109   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.715646   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717149   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717536   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.719006   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:53.715109   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.715646   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717149   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717536   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.719006   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:56.223706  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:56.235187  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:56.255491  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.255511  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:56.255551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:56.274455  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.274479  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:56.274519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:56.293621  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.293648  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:56.293702  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:56.312485  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.312511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:56.312558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:56.331239  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.331266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:56.331320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:56.349793  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.349813  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:56.349856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:56.368378  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.368397  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:56.368446  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:56.386706  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.386730  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:56.386744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:56.386759  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:56.435036  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:56.435067  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:56.456766  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:56.456793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:56.515022  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:56.506534   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.507203   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.508885   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.509323   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.510920   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:56.506534   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.507203   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.508885   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.509323   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.510920   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:56.515044  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:56.515056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:56.537382  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:56.537424  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.067413  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:59.078926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:59.098458  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.098490  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:59.098543  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:59.119074  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.119100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:59.119146  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:59.138014  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.138036  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:59.138082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:59.157367  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.157390  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:59.157433  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:59.175923  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.175950  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:59.176008  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:59.194211  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.194243  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:59.194295  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:59.212980  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.213004  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:59.213050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:59.231233  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.231255  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:59.231266  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:59.231277  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.260354  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:59.260377  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:59.307751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:59.307784  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:59.327756  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:59.327782  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:59.382873  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:59.375811   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.376331   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.377895   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.378317   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.379902   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:03:59.375811   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.376331   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.377895   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.378317   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.379902   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:03:59.382895  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:59.382908  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:01.903304  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:01.914514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:01.933300  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.933328  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:01.933388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:01.952153  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.952181  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:01.952225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:01.970903  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.970933  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:01.970987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:01.989493  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.989513  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:01.989567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:02.009114  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.009141  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:02.009198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:02.030277  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.030310  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:02.030365  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:02.050466  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.050492  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:02.050551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:02.069917  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.069941  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:02.069956  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:02.069970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:02.115721  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:02.115750  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:02.135348  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:02.135373  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:02.190691  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:02.183688   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.184205   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.185799   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.186209   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.187682   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:02.183688   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.184205   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.185799   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.186209   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.187682   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:02.190712  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:02.190724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:02.209097  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:02.209122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:04.737357  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:04.748553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:04.770341  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.770369  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:04.770424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:04.791137  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.791165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:04.791214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:04.810520  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.810541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:04.810607  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:04.828972  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.829000  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:04.829055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:04.849074  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.849096  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:04.849148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:04.868041  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.868063  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:04.868115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:04.886481  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.886504  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:04.886567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:04.905235  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.905262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:04.905274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:04.905285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:04.953851  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:04.953880  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:04.973781  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:04.973806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:05.031345  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:05.031368  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:05.031383  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:05.050812  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:05.050839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:07.580204  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:07.592091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:07.611238  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.611267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:07.611318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:07.630713  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.630736  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:07.630786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:07.649511  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.649541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:07.649620  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:07.668236  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.668264  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:07.668323  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:07.687077  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.687101  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:07.687158  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:07.705952  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.705982  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:07.706036  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:07.725156  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.725178  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:07.725224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:07.744024  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.744049  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:07.744063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:07.744079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:07.797680  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:07.797721  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:07.819453  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:07.819481  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:07.875026  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:07.875046  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:07.875059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:07.893942  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:07.893968  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
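
The cycle above repeats throughout this stretch of the log: each pass probes for a kube-apiserver process, then asks Docker for a container of each control-plane component by its k8s_ name prefix, and finds none. Below is a minimal Go sketch of that probe pattern, not minikube's actual implementation; it assumes a Docker-runtime node reachable from the current shell and mirrors the logged `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` calls:

    // probe.go - illustrative sketch only, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, c := range components {
            // Same shape as the logged command.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            // An empty result corresponds to the log's
            // `No container was found matching "<component>"` warnings.
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }

In this failing run every probe returns zero IDs, which is why each pass falls back to gathering kubelet, dmesg, Docker, and container-status output instead of per-container logs.
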
	I1223 00:04:10.422234  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:10.433749  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:10.453027  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.453049  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:10.453099  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:10.471766  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.471789  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:10.471840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:10.489960  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.489981  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:10.490025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:10.508537  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.508558  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:10.508614  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:10.527336  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.527362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:10.527418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:10.545995  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.546019  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:10.546074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:10.564167  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.564196  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:10.564254  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:10.582919  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.582947  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:10.582961  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:10.582974  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:10.630969  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:10.631004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:10.651161  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:10.651197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:10.709000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:10.709026  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:10.709041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:10.728175  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:10.728203  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
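
Every "describe nodes" attempt fails the same way: the bundled kubectl dials localhost:8443 and is refused, meaning nothing is listening where the apiserver should be. A quick TCP dial reproduces the symptom the embedded kubectl reports; this is a minimal sketch, with the address and timeout assumed from the log:

    // dial.go - illustrative sketch only; checks whether anything listens on :8443.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the logged stderr:
            // dial tcp [::1]:8443: connect: connection refused
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }

The `[::1]` in the stderr shows kubectl resolved localhost to the IPv6 loopback; the refusal there aborts API discovery before any group list can be fetched, which is why each attempt emits the same five memcache.go errors.
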
	I1223 00:04:13.258812  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:13.271437  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:13.293437  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.293468  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:13.293525  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:13.313483  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.313508  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:13.313568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:13.333612  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.333643  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:13.333709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:13.353086  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.353111  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:13.353169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:13.372208  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.372230  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:13.372275  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:13.391431  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.391457  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:13.391507  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:13.410402  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.410434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:13.410502  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:13.428653  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.428675  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:13.428687  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:13.428709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:13.474690  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:13.474729  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:13.495426  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:13.495457  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:13.550790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:13.550810  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:13.550822  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:13.569370  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:13.569397  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
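
The pgrep probes (`sudo pgrep -xnf kube-apiserver.*minikube.*`) land roughly 2.5 to 3 seconds apart, consistent with a fixed-interval wait loop wrapped around each diagnostic pass. The sketch below is a hedged reconstruction of such a loop; the interval, deadline, and local execution are assumptions, since the real probe runs via sudo over SSH to the node:

    // wait.go - illustrative sketch only; polls for the apiserver process
    // at the cadence the log's timestamps suggest.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func apiserverRunning() bool {
        // pgrep exits non-zero when nothing matches the pattern.
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(2500 * time.Millisecond) // ~ the gap between logged probes
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
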
	I1223 00:04:16.099133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:16.110484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:16.129712  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.129743  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:16.129808  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:16.147785  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.147808  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:16.147854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:16.167259  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.167284  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:16.167333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:16.186151  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.186178  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:16.186223  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:16.206074  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.206099  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:16.206154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:16.225296  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.225319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:16.225369  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:16.244091  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.244115  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:16.244160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:16.263620  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.263643  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:16.263655  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:16.263667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:16.323241  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:16.323265  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:16.323281  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:16.342320  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:16.342346  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.371156  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:16.371183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:16.421158  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:16.421188  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:18.942795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:18.954257  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:18.974190  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.974217  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:18.974270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:18.993178  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.993200  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:18.993245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:19.013377  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.013405  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:19.013465  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:19.034917  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.034941  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:19.034990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:19.054247  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.054271  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:19.054326  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:19.072206  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.072235  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:19.072297  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:19.091855  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.091882  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:19.091933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:19.111067  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.111100  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:19.111114  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:19.111127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:19.161923  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:19.161955  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:19.182679  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:19.182708  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:19.239475  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:19.239503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:19.239521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:19.259046  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:19.259075  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
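
Each "Gathering logs for ..." step maps to one shell pipeline on the node, and the pipelines are quoted verbatim in the log. Replaying the same pipelines by hand is one way to reproduce what the report collected; the sketch below simply shells them out locally and assumes bash, sudo, journalctl, dmesg, and a container runtime are available on the machine where it runs:

    // gather.go - illustrative sketch only; replays the logged collection commands.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        gathers := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            fmt.Printf("== %s == (err=%v)\n%s\n", g.name, err, out)
        }
    }
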
	I1223 00:04:21.799246  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:21.810742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:21.830826  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.830852  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:21.830896  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:21.849427  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.849455  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:21.849501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:21.867823  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.867847  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:21.867891  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:21.886431  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.886452  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:21.886508  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:21.905079  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.905103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:21.905160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:21.923344  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.923365  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:21.923407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:21.941945  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.941966  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:21.942012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:21.959749  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.959773  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:21.959785  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:21.959795  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:21.979750  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:21.979776  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:22.008278  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:22.008301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:22.059988  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:22.060022  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:22.080174  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:22.080201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:22.135625  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:24.636526  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:24.647769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:24.666800  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.666823  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:24.666873  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:24.685078  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.685100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:24.685153  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:24.703219  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.703238  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:24.703287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:24.721619  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.721647  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:24.721705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:24.740548  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.740570  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:24.740632  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:24.758544  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.758568  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:24.758633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:24.776285  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.776317  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:24.776445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:24.794360  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.794386  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:24.794399  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:24.794413  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:24.840111  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:24.840142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:24.860260  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:24.860286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:24.915702  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:24.915723  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:24.915736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:24.934368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:24.934394  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.463653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:27.474997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:27.494098  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.494127  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:27.494183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:27.513771  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.513799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:27.513855  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:27.534688  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.534720  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:27.534777  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:27.553043  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.553065  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:27.553115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:27.571979  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.572005  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:27.572049  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:27.590357  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.590376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:27.590419  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:27.609465  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.609490  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:27.609547  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:27.628214  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.628238  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:27.628253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:27.628267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:27.646519  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:27.646545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.674935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:27.674958  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:27.721277  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:27.721306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:27.741140  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:27.741165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:27.796676  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:30.297779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:30.308987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:30.327806  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.327827  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:30.327885  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:30.347142  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.347165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:30.347216  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:30.365629  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.365656  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:30.365729  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:30.383470  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.383496  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:30.383552  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:30.402127  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.402152  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:30.402214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:30.420681  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.420706  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:30.420757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:30.439453  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.439475  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:30.439517  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:30.458669  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.458691  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:30.458702  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:30.458713  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:30.505022  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:30.505050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:30.528295  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:30.528323  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:30.585055  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:30.585076  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:30.585088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:30.604200  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:30.604229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:33.131779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:33.143670  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:33.163179  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.163200  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:33.163245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:33.182970  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.182992  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:33.183043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:33.201569  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.201609  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:33.201656  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:33.219907  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.219931  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:33.219989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:33.239604  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.239630  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:33.239675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:33.258182  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.258211  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:33.258263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:33.277606  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.277632  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:33.277678  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:33.297258  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.297283  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:33.297296  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:33.297312  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:33.344903  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:33.344932  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:33.364742  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:33.364768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:33.420528  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:33.420549  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:33.420560  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:33.439384  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:33.439411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
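The container-status collector above prefers crictl and only falls back to plain docker. The backtick expression keeps the command word non-empty when crictl is absent, so the `||` fallback still fires on the resulting "command not found". The same line, unrolled as a sketch:

    # Use crictl when installed; otherwise the literal word 'crictl' fails to
    # execute and the docker branch runs instead.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a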
	I1223 00:04:35.968903  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
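Each retry cycle opens with this process-level probe for the apiserver. Decoded, as a sketch: with -f, pgrep matches against the full command line; -x requires the pattern to match that whole line; -n returns only the newest match. A non-zero exit simply means no kube-apiserver process exists yet.

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo "not yet"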
	I1223 00:04:35.980276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:35.999444  687772 logs.go:282] 0 containers: []
	W1223 00:04:35.999474  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:35.999534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:36.018792  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.018819  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:36.018880  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:36.036956  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.036985  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:36.037043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:36.055239  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.055265  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:36.055315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:36.073241  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.073272  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:36.073325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:36.091575  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.091613  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:36.091662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:36.110369  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.110396  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:36.110448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:36.128481  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.128505  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
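The eight docker ps probes above search for control-plane containers by name. cri-dockerd (like the old dockershim) names pod containers k8s_<container>_<pod>_<namespace>_..., so a filter on the k8s_ prefix finds each component, if it exists; here every probe returns an empty list. The loop below sketches the same sweep in one pass:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done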
	I1223 00:04:36.128516  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:36.128526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:36.176492  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:36.176526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:36.196649  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:36.196675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:36.253201  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:36.253224  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:36.253241  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:36.273351  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:36.273379  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.804411  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:38.815899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:38.834644  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.834668  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:38.834713  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:38.853892  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.853919  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:38.853967  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:38.871484  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.871505  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:38.871554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:38.889803  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.889828  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:38.889879  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:38.909558  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.909586  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:38.909652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:38.929528  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.929553  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:38.929624  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:38.948153  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.948181  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:38.948241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:38.966657  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.966679  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:38.966689  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:38.966711  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.994610  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:38.994637  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:39.040694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:39.040722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:39.060391  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:39.060417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:39.116169  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:39.116189  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:39.116201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.638009  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:41.650427  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:41.670214  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.670241  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:41.670289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:41.689539  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.689568  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:41.689651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:41.708449  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.708472  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:41.708520  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:41.727897  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.727918  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:41.727963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:41.748169  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.748200  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:41.748252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:41.767148  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.767172  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:41.767224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:41.789562  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.789589  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:41.789665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:41.808259  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.808281  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:41.808292  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:41.808304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.827093  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:41.827120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:41.854644  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:41.854671  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:41.901960  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:41.901995  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:41.921983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:41.922011  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:41.978723  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:44.479583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:44.491055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:44.513749  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.513779  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:44.513836  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:44.535619  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.535648  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:44.535722  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:44.555441  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.555464  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:44.555512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:44.574828  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.574851  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:44.574895  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:44.593270  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.593293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:44.593350  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:44.612157  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.612182  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:44.612239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:44.630342  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.630366  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:44.630417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:44.648864  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.648893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:44.648905  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:44.648917  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:44.698462  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:44.698494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:44.718432  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:44.718463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:44.777738  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:44.777764  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:44.777781  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:44.798488  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:44.798522  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:47.328787  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:47.340091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:47.359764  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.359786  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:47.359834  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:47.378531  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.378557  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:47.378633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:47.397279  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.397303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:47.397351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:47.415379  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.415404  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:47.415449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:47.433342  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.433363  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:47.433407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:47.452134  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.452153  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:47.452195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:47.470492  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.470514  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:47.470565  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:47.489435  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.489462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:47.489475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:47.489490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:47.543310  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:47.543341  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:47.563678  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:47.563716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:47.618877  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:47.611492   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.612043   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.613658   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.614136   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.615686   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:47.618902  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:47.618916  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:47.637117  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:47.637142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:50.165288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:50.176485  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:50.195504  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.195530  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:50.195573  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:50.214411  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.214435  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:50.214486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:50.232050  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.232073  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:50.232113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:50.249723  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.249747  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:50.249805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:50.269197  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.269220  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:50.269262  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:50.287018  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.287042  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:50.287084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:50.304852  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.304876  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:50.304923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:50.323126  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.323150  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:50.323164  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:50.323177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:50.371303  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:50.371328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:50.391396  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:50.391419  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:50.446479  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:50.439351   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.440054   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.441636   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.442091   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.443655   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:50.446503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:50.446519  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:50.466869  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:50.466895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.004783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:53.016488  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:53.037102  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.037130  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:53.037175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:53.056487  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.056509  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:53.056551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:53.074919  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.074938  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:53.074983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:53.093142  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.093163  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:53.093203  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:53.112007  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.112030  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:53.112079  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:53.130737  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.130759  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:53.130802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:53.149980  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.150009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:53.150057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:53.167468  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.167493  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:53.167503  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:53.167513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.195775  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:53.195800  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:53.243212  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:53.243238  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:53.263047  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:53.263073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:53.319009  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:53.311761   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.312309   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.313970   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.314449   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.315934   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:53.319029  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:53.319041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:55.838963  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:55.850169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:55.868811  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.868833  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:55.868878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:55.887281  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.887309  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:55.887361  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:55.905343  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.905372  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:55.905425  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:55.922787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.922811  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:55.922858  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:55.941063  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.941090  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:55.941143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:55.960388  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.960413  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:55.960549  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:55.978787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.978810  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:55.978854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:55.996489  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.996516  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:55.996530  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:55.996542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:56.048197  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:56.048229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:56.068640  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:56.068668  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:56.124436  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:56.117357   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.117926   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119480   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119940   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.121489   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:56.124461  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:56.124478  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:56.143079  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:56.143102  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.672032  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:58.683539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:58.702739  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.702762  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:58.702814  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:58.721434  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.721465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:58.721514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:58.741740  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.741768  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:58.741811  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:58.760960  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.760982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:58.761035  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:58.780979  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.781001  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:58.781045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:58.799417  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.799453  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:58.799501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:58.817985  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.818007  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:58.818051  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:58.837633  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.837659  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:58.837671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:58.837683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:58.856421  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:58.856448  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.883550  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:58.883574  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:58.932130  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:58.932158  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:58.953160  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:58.953189  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:59.009951  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:59.002105   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.002736   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004318   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004809   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.006430   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:01.512529  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:01.523921  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:01.542499  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.542525  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:01.542569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:01.560824  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.560850  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:01.560892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:01.578994  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.579017  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:01.579060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:01.597267  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.597293  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:01.597346  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:01.615860  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.615880  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:01.615919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:01.635022  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.635045  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:01.635084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:01.654257  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.654282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:01.654338  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:01.672470  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.672492  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:01.672502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:01.672513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:01.720496  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:01.720525  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:01.740698  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:01.740724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:01.800538  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:01.800562  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:01.800579  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:01.820265  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:01.820291  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
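	That completes one full log-gathering pass. In each pass minikube pulls from five fixed sources on the node, in varying order: the kubelet journal, dmesg, "describe nodes" via the bundled kubectl, the Docker and cri-docker journals, and container status via crictl with a docker ps fallback. The commands, copied verbatim from the Run lines above, can be replayed by hand over ssh to the node:

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	In this run only "describe nodes" fails; the journal and container-status commands complete, while the describe-nodes failure repeats for the reason shown next.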
	I1223 00:05:04.348938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:04.360190  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:04.379095  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.379124  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:04.379177  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:04.396991  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.397012  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:04.397057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:04.415658  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.415682  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:04.415750  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:04.434023  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.434049  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:04.434093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:04.452721  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.452744  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:04.452791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:04.471221  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.471247  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:04.471294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:04.489656  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.489685  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:04.489734  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:04.508637  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.508669  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:04.508689  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:04.508702  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:04.526928  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:04.526953  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.553896  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:04.553923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:04.602972  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:04.602999  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:04.622788  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:04.622812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:04.678232  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
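	The repeated "failed describe nodes" block is the same outage seen from kubectl's side: the node-local kubeconfig points at https://localhost:8443, and with no kube-apiserver container running nothing is listening there, so client-go retries API discovery a handful of times, logging one memcache.go "connection refused" error per attempt before giving up. The check can be reproduced directly on the node with the paths shown in the log (the exit-code echo is added here for illustration):

	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig; echo "exit=$?"
	    # while the apiserver is down this prints exit=1 after:
	    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?

	Every subsequent pass of the loop reproduces this block, apart from timestamps and PIDs, until an apiserver container appears in the docker ps output.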
	I1223 00:05:07.179923  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:07.191963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:07.211239  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.211263  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:07.211304  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:07.230281  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.230302  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:07.230343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:07.249365  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.249391  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:07.249443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:07.269410  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.269431  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:07.269484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:07.288681  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.288711  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:07.288756  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:07.307722  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.307742  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:07.307785  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:07.324479  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.324503  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:07.324557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:07.343010  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.343030  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:07.343041  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:07.343056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:07.370090  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:07.370116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:07.416268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:07.416294  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:07.436063  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:07.436088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:07.492624  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:07.492650  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:07.492667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.011735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:10.025412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:10.046816  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.046848  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:10.046917  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:10.065664  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.065693  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:10.065752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:10.084486  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.084512  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:10.084569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:10.103489  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.103510  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:10.103563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:10.121383  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.121413  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:10.121457  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:10.139817  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.139840  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:10.139883  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:10.158123  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.158142  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:10.158195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:10.176690  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.176714  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:10.176728  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:10.176743  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:10.221786  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:10.221818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:10.241642  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:10.241670  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:10.306092  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:10.306110  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:10.306122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.325227  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:10.325254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:12.853199  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:12.864559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:12.883528  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.883553  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:12.883615  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:12.901914  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.901946  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:12.902003  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:12.920676  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.920703  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:12.920746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:12.938812  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.938840  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:12.938898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:12.956564  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.956588  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:12.956651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:12.975030  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.975056  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:12.975112  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:12.992748  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.992770  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:12.992819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:13.013710  687772 logs.go:282] 0 containers: []
	W1223 00:05:13.013733  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:13.013744  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:13.013756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:13.044889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:13.044920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:13.090565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:13.090611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:13.110578  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:13.110614  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:13.166048  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:13.166066  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:13.166079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.685941  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:15.697434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:15.716560  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.716607  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:15.716664  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:15.735775  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.735799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:15.735847  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:15.753974  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.753996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:15.754046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:15.771763  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.771788  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:15.771846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:15.790222  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.790249  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:15.790294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:15.808671  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.808691  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:15.808735  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:15.827295  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.827324  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:15.827377  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:15.845637  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.845658  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:15.845668  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:15.845679  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:15.892975  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:15.893004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:15.912599  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:15.912626  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:15.967763  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:15.967788  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:15.967801  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.986603  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:15.986632  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:18.516732  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:18.529415  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:18.549048  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.549069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:18.549113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:18.567672  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.567705  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:18.567771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:18.586513  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.586538  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:18.586613  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:18.604518  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.604538  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:18.604579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:18.623446  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.623467  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:18.623510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:18.642213  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.642230  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:18.642279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:18.660501  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.660521  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:18.660563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:18.678846  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.678869  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:18.678882  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:18.678893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:18.727936  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:18.727965  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:18.749033  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:18.749059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:18.804351  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:18.804386  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:18.804401  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:18.822650  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:18.822681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:21.351938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:21.363094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:21.382091  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.382123  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:21.382179  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:21.400790  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.400813  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:21.400861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:21.418989  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.419014  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:21.419060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:21.437814  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.437839  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:21.437898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:21.456967  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.456991  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:21.457045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:21.475541  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.475566  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:21.475644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:21.494493  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.494518  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:21.494576  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:21.513952  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.513979  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:21.513990  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:21.514001  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:21.563253  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:21.563283  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:21.583663  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:21.583693  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:21.638754  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:21.638774  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:21.638786  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:21.657674  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:21.657704  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:24.188905  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:24.200277  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:24.220108  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.220133  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:24.220188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:24.240286  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.240307  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:24.240351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:24.260644  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.260670  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:24.260724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:24.282918  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.282943  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:24.282990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:24.302929  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.302956  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:24.303013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:24.322124  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.322145  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:24.322196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:24.340965  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.340993  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:24.341050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:24.360121  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.360148  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:24.360162  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:24.360177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:24.406776  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:24.406809  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:24.428882  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:24.428909  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:24.484257  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:24.484286  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:24.484304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:24.504724  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:24.504752  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.038561  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:27.050259  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:27.069265  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.069288  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:27.069333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:27.088081  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.088108  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:27.088171  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:27.107172  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.107198  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:27.107246  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:27.125773  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.125804  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:27.125862  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:27.144259  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.144282  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:27.144339  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:27.163197  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.163217  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:27.163263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:27.181942  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.181971  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:27.182030  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:27.199936  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.199964  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:27.199980  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:27.199996  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:27.218431  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:27.218456  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.246756  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:27.246783  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:27.297557  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:27.297603  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:27.318177  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:27.318205  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:27.374968  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
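The repeated "describe nodes" failures above all have the same root cause: kubectl inside the node dials localhost:8443 and nothing is listening there, so every API group lookup ends in "connection refused". A quick way to reproduce that diagnosis from code is a bare TCP dial; the sketch below is illustrative only, the address and timeout are taken from the log and a guess respectively, and it is not minikube's actual health check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Assumed endpoint: the apiserver address kubectl is dialing in the log above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the log: "connect: connection refused" means no listener on the port.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }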
	I1223 00:05:29.875712  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:29.887100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:29.906809  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.906834  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:29.906892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:29.926388  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.926414  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:29.926467  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:29.946220  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.946248  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:29.946302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:29.967102  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.967131  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:29.967188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:29.986540  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.986564  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:29.986631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:30.004809  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.004835  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:30.004881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:30.023625  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.023655  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:30.023711  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:30.042067  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.042089  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:30.042100  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:30.042120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:30.061885  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:30.061913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:30.090401  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:30.090432  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:30.138962  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:30.138993  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:30.159224  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:30.159250  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:30.216295  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
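Each polling round probes Docker for containers whose names carry the kubelet's k8s_ prefix, one control-plane component at a time; an empty ID list for every component is what produces the "No container was found matching" warnings. A minimal stand-alone version of that probe, where the component names and docker arguments are copied from the log and everything else is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            // Same filter the log runs over SSH: match container names like k8s_<component>.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "probe failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }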
	I1223 00:05:32.716974  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:32.728432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:32.748217  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.748245  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:32.748292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:32.767866  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.767887  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:32.767935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:32.788690  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.788723  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:32.788782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:32.808366  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.808397  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:32.808460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:32.827631  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.827655  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:32.827714  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:32.846429  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.846456  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:32.846511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:32.865177  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.865202  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:32.865258  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:32.885235  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.885258  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:32.885268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:32.885280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:32.905218  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:32.905245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:32.960860  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:32.960885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:32.960905  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:32.979917  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:32.979943  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:33.008187  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:33.008218  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
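The "container status" step uses a shell fallback: the command substitution `which crictl || echo crictl` resolves to the crictl path when it is installed (otherwise to the bare name, which then fails), and the outer "|| sudo docker ps -a" falls back to Docker when crictl is missing or errors. A rough Go equivalent of the same fallback, under the assumption that plain "crictl ps -a" / "docker ps -a" stand in for the sudo invocations:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, mirroring:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        return exec.Command("docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both runtimes unavailable:", err)
            return
        }
        fmt.Print(string(out))
    }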
	I1223 00:05:35.555359  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:35.566888  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:35.586562  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.586588  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:35.586657  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:35.605495  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.605522  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:35.605579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:35.624671  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.624700  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:35.624760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:35.643198  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.643222  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:35.643278  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:35.662223  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.662245  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:35.662290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:35.681991  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.682016  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:35.682071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:35.700985  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.701009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:35.701062  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:35.719976  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.720000  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:35.720015  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:35.720029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.767694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:35.767728  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:35.792896  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:35.792935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:35.849448  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:35.849470  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:35.849491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:35.868248  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:35.868274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:38.397175  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:38.408856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:38.428054  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.428085  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:38.428141  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:38.447350  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.447376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:38.447428  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:38.466426  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.466455  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:38.466512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:38.486074  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.486104  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:38.486173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:38.505584  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.505626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:38.505709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:38.527387  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.527416  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:38.527473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:38.547928  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.547955  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:38.548015  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:38.568237  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.568262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:38.568274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:38.568285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:38.616522  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:38.616555  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:38.638676  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:38.638707  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:38.694984  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:38.695006  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:38.695019  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:38.713940  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:38.713969  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
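Between container probes, the runner tails a fixed window of host logs: the last 400 journal lines for the docker/cri-docker and kubelet units, plus warning-level-and-above kernel messages from dmesg. The sketch below simply replays those exact commands (copied verbatim from the log) from Go; the bash -c wrapper mirrors how ssh_runner executes them, though running them locally rather than over SSH is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Each entry reproduces a "Gathering logs for ..." command from the log above.
        sources := map[string]string{
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s logs failed: %v\n", name, err)
                continue
            }
            fmt.Printf("== %s ==\n%s\n", name, out)
        }
    }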
	I1223 00:05:41.244859  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:41.256283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:41.275201  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.275233  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:41.275280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:41.295272  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.295299  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:41.295353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:41.313039  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.313069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:41.313135  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:41.331394  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.331418  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:41.331491  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:41.350556  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.350583  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:41.350650  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:41.369215  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.369242  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:41.369290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:41.387799  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.387826  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:41.387877  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:41.406760  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.406785  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:41.406799  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:41.406813  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:41.453518  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:41.453548  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:41.473671  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:41.473700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:41.531098  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:41.531124  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:41.531139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:41.551968  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:41.551997  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:44.081115  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:44.092382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:44.111299  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.111326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:44.111381  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:44.130168  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.130196  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:44.130250  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:44.149028  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.149052  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:44.149109  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:44.167326  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.167346  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:44.167388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:44.185875  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.185898  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:44.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:44.205297  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.205320  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:44.205370  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:44.224561  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.224608  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:44.224661  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:44.242760  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.242782  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:44.242795  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:44.242808  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:44.290363  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:44.290399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:44.310780  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:44.310806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:44.367913  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:44.367931  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:44.367945  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:44.387052  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:44.387080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:46.916305  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:46.927926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:46.946856  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.946882  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:46.946941  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:46.965651  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.965674  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:46.965720  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:46.984835  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.984863  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:46.984920  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:47.005005  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.005033  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:47.005095  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:47.026916  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.026948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:47.026996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:47.047971  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.048003  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:47.048064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:47.067344  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.067372  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:47.067424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:47.087055  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.087079  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:47.087093  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:47.087107  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:47.134052  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:47.134085  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:47.154446  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:47.154479  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:47.210710  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:47.210734  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:47.210746  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:47.230988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:47.231017  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:49.759465  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:49.771325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:49.791131  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.791160  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:49.791219  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:49.810792  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.810814  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:49.810859  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:49.829432  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.829454  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:49.829499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:49.847527  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.847548  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:49.847603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:49.866252  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.866275  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:49.866315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:49.885934  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.885955  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:49.885996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:49.903668  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.903690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:49.903733  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:49.923276  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.923298  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:49.923309  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:49.923320  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:49.968185  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:49.968217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:49.988993  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:49.989021  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:50.052060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:50.052083  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:50.052100  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:50.070860  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:50.070885  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:52.599679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:52.611289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:52.629699  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.629724  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:52.629782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:52.648660  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.648689  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:52.648740  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:52.667204  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.667232  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:52.667287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:52.685635  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.685667  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:52.685718  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:52.703669  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.703692  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:52.703742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:52.721467  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.721495  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:52.721553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:52.739858  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.739885  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:52.739930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:52.759123  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.759151  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:52.759165  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:52.759178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:52.812520  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:52.812552  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:52.832551  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:52.832578  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:52.887680  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:52.887700  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:52.887719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:52.906246  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:52.906276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
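Stepping back, the timestamps show the overall shape of this section: after each gathering round the runner sleeps roughly 2.5 seconds, re-runs the pgrep liveness check for a kube-apiserver process, and starts the next round until an overall deadline expires. A stripped-down version of that wait loop; the pgrep arguments are read off the log, while the interval and the six-minute timeout are guesses, not minikube's real values:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            // Same liveness check as the log: is a kube-apiserver process running?
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("apiserver process found")
                return
            }
            time.Sleep(2500 * time.Millisecond) // approximates the ~2.5s gap between rounds
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }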
	I1223 00:05:55.444344  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:55.455763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:55.475305  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.475332  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:55.475389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:55.494094  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.494117  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:55.494164  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:55.511874  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.511896  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:55.511942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:55.530088  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.530113  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:55.530159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:55.548749  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.548778  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:55.548828  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:55.567179  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.567204  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:55.567269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:55.586315  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.586343  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:55.586395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:55.605282  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.605303  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:55.605314  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:55.605327  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:55.624085  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:55.624113  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.652038  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:55.652065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:55.699247  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:55.699274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:55.719031  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:55.719058  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:55.777078  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
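Each block of eight 0 containers warnings comes from one docker ps query per control-plane component, filtered on the k8s_ name prefix that cri-dockerd gives Kubernetes-managed containers. A compact sketch of that scan (illustrative; the real logic lives in minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Equivalent of: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}

Because docker ps -a also lists exited containers, an empty ID list for every component, as in the cycles here, means no control-plane container exists even in a crashed state.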
	I1223 00:05:58.278708  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:58.291024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:58.310944  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.310971  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:58.311027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:58.329419  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.329443  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:58.329499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:58.346556  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.346579  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:58.346653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:58.364565  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.364601  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:58.364653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:58.383020  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.383043  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:58.383089  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:58.401354  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.401381  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:58.401440  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:58.419356  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.419377  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:58.419426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:58.438428  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.438449  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:58.438461  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:58.438477  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:58.458325  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:58.458353  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:58.513127  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:58.513156  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:58.513173  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:58.532159  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:58.532183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:58.559409  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:58.559433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:01.105933  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:01.117378  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:01.136395  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.136418  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:01.136463  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:01.155037  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.155063  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:01.155111  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:01.173939  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.173960  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:01.174004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:01.193250  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.193271  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:01.193312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:01.210927  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.210948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:01.210990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:01.229293  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.229319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:01.229367  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:01.247971  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.247997  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:01.248059  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:01.267642  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.267667  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:01.267688  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:01.267718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:01.290552  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:01.290581  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:01.346096  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:01.346115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:01.346127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:01.364490  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:01.364516  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:01.391895  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:01.391918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:03.938979  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:03.950393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:03.969334  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.969364  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:03.969448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:03.988183  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.988205  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:03.988252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:04.007742  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.007767  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:04.007821  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:04.027502  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.027528  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:04.027582  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:04.048194  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.048222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:04.048286  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:04.067020  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.067044  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:04.067096  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:04.085747  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.085776  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:04.085829  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:04.103906  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.103936  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:04.103950  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:04.103963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:04.131404  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:04.131427  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:04.178862  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:04.178893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:04.198797  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:04.198823  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:04.255150  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:04.255174  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:04.255190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:06.777149  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:06.788444  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:06.807818  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.807839  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:06.807881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:06.827018  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.827044  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:06.827092  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:06.845320  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.845342  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:06.845395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:06.862837  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.862856  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:06.862907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:06.880629  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.880649  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:06.880690  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:06.898665  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.898694  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:06.898762  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:06.916571  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.916606  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:06.916662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:06.934190  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.934213  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:06.934228  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:06.934245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:06.961869  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:06.961895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:07.008426  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:07.008460  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:07.033602  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:07.033641  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:07.089432  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:07.089452  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:07.089463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.608089  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:09.619510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:09.638402  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.638426  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:09.638473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:09.657218  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.657247  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:09.657292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:09.675838  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.675871  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:09.675935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:09.694913  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.694939  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:09.694992  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:09.714024  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.714046  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:09.714097  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:09.733120  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.733142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:09.733188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:09.752081  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.752104  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:09.752148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:09.770630  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.770661  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:09.770676  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:09.770700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:09.818931  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:09.818967  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:09.839282  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:09.839309  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:09.895206  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:09.895234  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:09.895247  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.913965  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:09.913994  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.442178  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:12.453355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:12.472243  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.472267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:12.472312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:12.491113  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.491136  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:12.491192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:12.511291  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.511317  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:12.511376  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:12.532112  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.532141  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:12.532196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:12.551226  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.551250  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:12.551293  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:12.569426  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.569449  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:12.569504  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:12.588494  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.588520  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:12.588569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:12.606610  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.606644  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:12.606657  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:12.606674  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.634113  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:12.634143  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:12.681112  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:12.681140  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:12.700711  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:12.700736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:12.757239  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:12.757259  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:12.757273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.278124  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:15.290283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:15.309406  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.309433  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:15.309481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:15.328093  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.328119  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:15.328173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:15.346922  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.346949  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:15.347006  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:15.364932  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.364960  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:15.365013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:15.383120  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.383144  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:15.383188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:15.401332  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.401355  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:15.401404  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:15.419961  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.419986  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:15.420037  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:15.438746  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.438769  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:15.438780  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:15.438793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:15.486016  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:15.486044  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:15.506911  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:15.506939  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:15.566808  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:15.566826  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:15.566836  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.586013  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:15.586040  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.115753  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:18.127221  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:18.146018  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.146048  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:18.146094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:18.165274  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.165294  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:18.165337  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:18.183880  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.183904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:18.183947  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:18.202061  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.202082  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:18.202130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:18.219858  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.219892  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:18.219945  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:18.238966  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.238987  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:18.239032  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:18.260921  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.260949  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:18.260997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:18.280705  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.280735  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:18.280750  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:18.280764  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:18.299732  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:18.299756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.327603  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:18.327631  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:18.375722  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:18.375749  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:18.397572  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:18.397611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:18.454135  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:20.955833  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:20.967309  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:20.986237  687772 logs.go:282] 0 containers: []
	W1223 00:06:20.986258  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:20.986301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:21.004350  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.004377  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:21.004434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:21.022893  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.022919  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:21.022974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:21.042421  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.042441  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:21.042484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:21.061267  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.061293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:21.061355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:21.079988  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.080011  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:21.080064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:21.098196  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.098225  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:21.098279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:21.117158  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.117180  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:21.117191  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:21.117202  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:21.146189  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:21.146215  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:21.192645  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:21.192677  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:21.212689  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:21.212716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:21.269438  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:21.261783   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.262320   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.263990   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.264498   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.266022   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:21.269462  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:21.269480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
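The cycle above is minikube's log-gathering pass: each control-plane component is looked up by its "k8s_" container-name prefix via "docker ps -a --filter=name=... --format={{.ID}}", and an empty result produces the "No container was found matching" warning. A minimal Go sketch of that lookup, assuming only that docker is on PATH (an illustration, not minikube's actual logs.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers (running or exited)
// whose name matches the k8s_<component> prefix, mirroring the
// "docker ps -a --filter=name=... --format={{.ID}}" calls above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}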
	I1223 00:06:23.789716  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:23.801130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:23.820155  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.820180  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:23.820239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:23.838850  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.838875  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:23.838919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:23.856860  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.856881  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:23.856931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:23.874630  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.874653  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:23.874700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:23.893425  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.893454  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:23.893521  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:23.912712  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.912734  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:23.912789  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:23.931097  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.931124  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:23.931178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:23.949113  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.949138  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:23.949152  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:23.949168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:23.996109  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:23.996137  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:24.016228  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:24.016254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:24.071647  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:24.064286   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.064800   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066333   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066786   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.068314   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:24.071665  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:24.071680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:24.090918  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:24.090944  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
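Alongside the container checks, each pass collects systemd unit logs with journalctl (kubelet on its own, docker and cri-docker together) and falls back from crictl to plain docker ps for container status. A sketch of the journalctl side under the same assumptions (sudo and journalctl available on the guest):

package main

import (
	"fmt"
	"os/exec"
)

// gatherUnitLogs shells out to journalctl for one or more systemd
// units, mirroring the "journalctl -u kubelet -n 400" and
// "journalctl -u docker -u cri-docker -n 400" calls in the log.
func gatherUnitLogs(lines int, units ...string) (string, error) {
	args := []string{"journalctl"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	args = append(args, "-n", fmt.Sprint(lines))
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := gatherUnitLogs(400, "docker", "cri-docker")
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	fmt.Print(logs)
}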
	I1223 00:06:26.624354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:26.635840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:26.654444  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.654473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:26.654537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:26.673364  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.673388  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:26.673436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:26.692467  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.692489  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:26.692539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:26.711627  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.711656  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:26.711709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:26.730302  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.730332  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:26.730386  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:26.748910  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.748939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:26.748995  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:26.768525  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.768548  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:26.768603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:26.788434  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.788462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:26.788476  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:26.788491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:26.845463  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:26.845482  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:26.845494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:26.864140  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:26.864167  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.890448  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:26.890476  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:26.937390  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:26.937422  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.457766  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:29.469205  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:29.488353  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.488376  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:29.488431  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:29.508035  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.508059  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:29.508114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:29.528210  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.528234  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:29.528280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:29.546344  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.546370  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:29.546432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:29.565125  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.565153  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:29.565200  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:29.584111  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.584142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:29.584195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:29.602714  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.602735  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:29.602778  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:29.621012  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.621042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:29.621058  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:29.621073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:29.669132  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:29.669168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.689406  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:29.689431  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:29.746681  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:29.746703  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:29.746720  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:29.765762  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:29.765793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:32.299443  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:32.310848  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:32.330298  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.330326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:32.330380  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:32.349664  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.349692  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:32.349745  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:32.367944  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.367969  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:32.368081  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:32.386919  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.386940  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:32.386983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:32.405416  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.405440  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:32.405487  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:32.423080  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.423100  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:32.423144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:32.441255  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.441282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:32.441336  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:32.459763  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.459789  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:32.459801  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:32.459812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:32.507284  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:32.507314  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:32.529983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:32.530014  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:32.587816  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:32.587843  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:32.587860  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:32.607796  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:32.607826  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.136489  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:35.147976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:35.166774  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.166794  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:35.166846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:35.185872  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.185899  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:35.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:35.204053  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.204074  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:35.204115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:35.223056  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.223077  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:35.223126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:35.241616  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.241645  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:35.241699  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:35.260422  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.260476  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:35.260536  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:35.279168  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.279192  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:35.279238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:35.297208  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.297236  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:35.297252  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:35.297267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:35.317273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:35.317299  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:35.374319  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:35.374337  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:35.374349  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:35.393025  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:35.393050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.420499  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:35.420537  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:37.968117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:37.979448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:37.998789  687772 logs.go:282] 0 containers: []
	W1223 00:06:37.998815  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:37.998861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:38.019815  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.019847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:38.019910  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:38.042524  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.042552  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:38.042617  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:38.061464  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.061489  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:38.061544  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:38.080482  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.080509  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:38.080558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:38.099189  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.099215  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:38.099279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:38.118161  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.118188  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:38.118244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:38.136752  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.136786  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:38.136803  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:38.136819  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:38.182751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:38.182779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:38.202352  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:38.202375  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:38.257901  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:38.257922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:38.257933  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:38.276963  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:38.276988  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:40.806792  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:40.818244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:40.837324  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.837348  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:40.837402  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:40.856364  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.856387  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:40.856453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:40.874753  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.874780  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:40.874831  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:40.893167  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.893193  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:40.893242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:40.910901  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.910924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:40.910976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:40.930108  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.930133  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:40.930191  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:40.949021  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.949047  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:40.949101  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:40.967221  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.967246  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:40.967260  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:40.967276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:40.988752  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:40.988779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:41.048349  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:41.048374  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:41.048387  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:41.067112  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:41.067138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:41.093421  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:41.093445  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:43.639363  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:43.653263  687772 out.go:203] 
	W1223 00:06:43.654345  687772 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1223 00:06:43.654374  687772 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1223 00:06:43.654383  687772 out.go:285] * Related issues:
	W1223 00:06:43.654397  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1223 00:06:43.654411  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1223 00:06:43.655505  687772 out.go:203] 
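This is where the start gives up: the "sudo pgrep -xnf kube-apiserver.*minikube.*" probe, repeated roughly every 2.8 seconds above, never found an apiserver process inside the 6m0s budget, so minikube exits with K8S_APISERVER_MISSING. A deadline-bounded sketch of that wait (illustrative; the real implementation interleaves the log gathering shown above between probes):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProc polls pgrep until a kube-apiserver process
// appears or the deadline passes. pgrep exits non-zero when nothing
// matches, so a nil error means the process exists.
func waitForAPIServerProc(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest, -f match the full command line.
		if exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(2800 * time.Millisecond) // poll cadence seen in the log
	}
	return errors.New("wait for apiserver proc: apiserver process never appeared")
}

func main() {
	if err := waitForAPIServerProc(6 * time.Minute); err != nil {
		fmt.Println("X Exiting due to K8S_APISERVER_MISSING:", err)
	}
}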
	
	
	==> Docker <==
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.188726089Z" level=info msg="Restoring containers: start."
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.201365877Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.219292925Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.739437960Z" level=info msg="Loading containers: done."
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.749914456Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.749957470Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.750005698Z" level=info msg="Initializing buildkit"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.769087870Z" level=info msg="Completed buildkit initialization"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777195260Z" level=info msg="Daemon has completed initialization"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777258358Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777420063Z" level=info msg="API listen on /run/docker.sock"
	Dec 23 00:00:40 newest-cni-348344 dockerd[918]: time="2025-12-23T00:00:40.777484333Z" level=info msg="API listen on [::]:2376"
	Dec 23 00:00:40 newest-cni-348344 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 23 00:00:41 newest-cni-348344 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Start docker client with request timeout 0s"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Loaded network plugin cni"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 23 00:00:41 newest-cni-348344 cri-dockerd[1209]: time="2025-12-23T00:00:41Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 23 00:00:41 newest-cni-348344 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:53.018485   21505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:53.019030   21505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:53.020741   21505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:53.021269   21505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:53.022923   21505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
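Every kubectl call in this run failed with "connect: connection refused" on [::1]:8443, which means nothing is listening on the apiserver port at all, rather than kubectl being misconfigured. A plain TCP dial confirms that independently of kubectl (host and port taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" here confirms the apiserver process is down,
	// not that the kubeconfig points at the wrong endpoint.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}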
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[Dec22 23:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +0.088099] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 12 57 26 ed f1 08 06
	[  +5.341024] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[ +14.537406] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 72 df 3b 35 8d 08 06
	[  +0.000388] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +2.465032] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 84 3f 6a 28 22 08 06
	[  +0.000373] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[Dec23 00:00] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	[Dec23 00:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 20 71 68 66 a5 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	
	
	==> kernel <==
	 00:06:53 up  3:49,  0 user,  load average: 0.49, 1.43, 1.66
	Linux newest-cni-348344 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:50 newest-cni-348344 kubelet[21320]: E1223 00:06:50.803276   21320 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:50 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:51 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 23 00:06:51 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:51 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:51 newest-cni-348344 kubelet[21338]: E1223 00:06:51.538622   21338 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:51 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:51 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:52 newest-cni-348344 kubelet[21372]: E1223 00:06:52.299348   21372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:52 newest-cni-348344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:06:53 newest-cni-348344 kubelet[21513]: E1223 00:06:53.039606   21513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:06:53 newest-cni-348344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:06:53 newest-cni-348344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
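The kubelet restart loop above is the root cause of the failure: this kubelet build refuses to validate its configuration on a cgroup v1 host, consistent with the dockerd deprecation warning earlier, so systemd restarts it endlessly and the apiserver static pod never starts. On Linux the unified (v2) hierarchy is commonly detected by the presence of /sys/fs/cgroup/cgroup.controllers; a minimal sketch of that check:

package main

import (
	"fmt"
	"os"
)

// onCgroupV2 reports whether the host uses the cgroup v2 unified
// hierarchy: cgroup.controllers exists only at the root of a
// cgroup2 mount. On this cgroup v1 host it would return false,
// the condition the failing kubelet validates against.
func onCgroupV2() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

func main() {
	fmt.Println("cgroup v2 unified hierarchy:", onCgroupV2())
}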
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-348344 -n newest-cni-348344: exit status 2 (299.591152ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-348344" apiserver is not running, skipping kubectl commands (state="Stopped")
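
A note on the check above: --format={{.APIServer}} is a Go text/template that minikube renders against its status struct, which is why the command prints the bare word Stopped while still exiting 2. A rough sketch of the flag's mechanics (the struct and field names below are illustrative stand-ins, not minikube's actual types):

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative stand-in for the status fields a --format template can reference.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
		// Prints: Stopped
	}
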
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.98s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (271.17s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
[the WARNING above repeated verbatim a further 163 times during the 9m0s wait; the apiserver at 192.168.103.2:8443 never became reachable. The distinct cert_rotation errors interleaved with those retries follow, in order:]
E1223 00:11:29.130989   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kubenet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:11:30.659987   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:11:32.331020   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:11:53.430581   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:12:09.813763   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:13:06.548092   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/enable-default-cni-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:13:09.060125   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/false-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:13:29.286275   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kindnet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:13:32.857123   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:13:38.110139   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:14:06.387644   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:14:21.985648   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:14:23.975329   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/bridge-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:14:52.331439   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kindnet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:14:53.440973   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:15:10.134303   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
E1223 00:15:31.034111   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.103.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.103.2:8443: connect: connection refused
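The poll above is nothing more than a label-selector pod list against the cluster's apiserver endpoint. To reproduce it by hand outside the test harness, something like the following should work (a sketch, assuming the no-preload-063943 kubeconfig context from this run is still present):

	# same query the helper issues, via the normal client path
	kubectl --context no-preload-063943 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# or the raw REST path exactly as logged in the warnings
	kubectl --context no-preload-063943 get --raw '/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard'

Both would fail with the same connection refused for as long as nothing is listening on 192.168.103.2:8443.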
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 2 (297.169531ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-063943 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-063943 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.578µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-063943 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
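The start_stop_delete_test.go:295 assertion compares the images in the dashboard-metrics-scraper deployment against "registry.k8s.io/echoserver:1.4"; the deployment info it wanted is empty here because the describe itself timed out. With a reachable apiserver, the relevant field can be pulled directly instead of parsing describe output (a sketch, same context and namespace as above):

	kubectl --context no-preload-063943 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'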
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-063943
helpers_test.go:244: (dbg) docker inspect no-preload-063943:

-- stdout --
	[
	    {
	        "Id": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	        "Created": "2025-12-22T23:45:49.557145486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-22T23:56:07.024549385Z",
	            "FinishedAt": "2025-12-22T23:56:05.577772514Z"
	        },
	        "Image": "sha256:9a87e850a5e640dd3e5f71477885272b970ba271e3722be8bebbe0157f704ffd",
	        "ResolvConfPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/hosts",
	        "LogPath": "/var/lib/docker/containers/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc/786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc-json.log",
	        "Name": "/no-preload-063943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-063943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-063943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "786df4b777717287f11f0ef2eab8115dad6a21597d5995b3b84e35ed2328cebc",
	                "LowerDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3-init/diff:/var/lib/docker/overlay2/c57dd1a41102d99c4ed6be3c60b871435428bd2cea6a3d8d172f0a67527ba009/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29902a9fc8792c76fa85dc5a0de0b07f3c2e185c6d971af2f6ebff298763d0a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-063943",
	                "Source": "/var/lib/docker/volumes/no-preload-063943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-063943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-063943",
	                "name.minikube.sigs.k8s.io": "no-preload-063943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e615544c2ed8dc279a0d7bd7031d234c4bd36d86ac886a8680dbb0ce786c6bb0",
	            "SandboxKey": "/var/run/docker/netns/e615544c2ed8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-063943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fe1a4d651e77a6056be2344adfa00e0a1474c8d315239814c9f2b4594dd53fd",
	                    "EndpointID": "77c3f5905f39c7f705fb61bcc99a23730dfec3ccc0be5afe97e05c39881c936c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "b6:1b:7b:4d:bd:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-063943",
	                        "786df4b77771"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
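Most of what this post-mortem needs from the inspect blob above is the container state, the node IP, and the host port mapped to 8443/tcp; docker inspect's Go-template formatter can pull those fields directly instead of dumping the whole document (a sketch against the same container):

	docker inspect -f '{{.State.Status}}' no-preload-063943
	docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-063943").IPAddress}}' no-preload-063943
	docker port no-preload-063943 8443/tcp   # maps to 127.0.0.1:33141 in this run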
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 2 (300.132461ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-063943 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-063943 logs -n 25: (1.167136309s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-003676 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status docker --all --full --no-pager          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat docker --no-pager                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/docker/daemon.json                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo docker system info                                       │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat cri-docker --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cri-dockerd --version                                    │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status containerd --all --full --no-pager      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat containerd --no-pager                      │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /lib/systemd/system/containerd.service               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo cat /etc/containerd/config.toml                          │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo containerd config dump                                   │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo systemctl status crio --all --full --no-pager            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │                     │
	│ ssh     │ -p kubenet-003676 sudo systemctl cat crio --no-pager                            │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ ssh     │ -p kubenet-003676 sudo crio config                                              │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ delete  │ -p kubenet-003676                                                               │ kubenet-003676    │ jenkins │ v1.37.0 │ 23 Dec 25 00:01 UTC │ 23 Dec 25 00:01 UTC │
	│ image   │ newest-cni-348344 image list --format=json                                      │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ pause   │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ unpause │ -p newest-cni-348344 --alsologtostderr -v=1                                     │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ delete  │ -p newest-cni-348344                                                            │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	│ delete  │ -p newest-cni-348344                                                            │ newest-cni-348344 │ jenkins │ v1.37.0 │ 23 Dec 25 00:06 UTC │ 23 Dec 25 00:06 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
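
Every row in the audit table is a plain CLI invocation and can be replayed against a live profile; the one row with no END TIME (`systemctl status crio` at 00:01 UTC) is consistent with the probe failing on a Docker-runtime node, where no crio unit exists. For example (a sketch; kubenet-003676 was deleted at the end of this run, so substitute a running profile):

    out/minikube-linux-amd64 ssh -p kubenet-003676 -- sudo systemctl status crio --all --full --no-pager
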
	
	
	==> Last Start <==
	Log file created at: 2025/12/23 00:00:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1223 00:00:34.066824  687772 out.go:360] Setting OutFile to fd 1 ...
	I1223 00:00:34.067051  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067058  687772 out.go:374] Setting ErrFile to fd 2...
	I1223 00:00:34.067063  687772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1223 00:00:34.067257  687772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1223 00:00:34.067701  687772 out.go:368] Setting JSON to false
	I1223 00:00:34.068753  687772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13374,"bootTime":1766434660,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1223 00:00:34.068805  687772 start.go:143] virtualization: kvm guest
	W1223 00:00:29.964565  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:31.965119  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:33.965297  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:34.070524  687772 out.go:179] * [newest-cni-348344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1223 00:00:34.072192  687772 notify.go:221] Checking for updates...
	I1223 00:00:34.072201  687772 out.go:179]   - MINIKUBE_LOCATION=22301
	I1223 00:00:34.073912  687772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1223 00:00:34.074996  687772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:34.076047  687772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1223 00:00:34.077175  687772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1223 00:00:34.078295  687772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1223 00:00:34.079882  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:34.080446  687772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1223 00:00:34.106101  687772 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1223 00:00:34.106213  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.161275  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.151129133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.161373  687772 docker.go:319] overlay module found
	I1223 00:00:34.163775  687772 out.go:179] * Using the docker driver based on existing profile
	I1223 00:00:34.164711  687772 start.go:309] selected driver: docker
	I1223 00:00:34.164723  687772 start.go:928] validating driver "docker" against &{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.164829  687772 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1223 00:00:34.165640  687772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1223 00:00:34.231050  687772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-23 00:00:34.221418272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1223 00:00:34.231362  687772 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1223 00:00:34.231388  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:34.231454  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:34.231489  687772 start.go:353] cluster config:
	{Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:34.233188  687772 out.go:179] * Starting "newest-cni-348344" primary control-plane node in "newest-cni-348344" cluster
	I1223 00:00:34.234320  687772 cache.go:134] Beginning downloading kic base image for docker with docker
	I1223 00:00:34.235503  687772 out.go:179] * Pulling base image v0.0.48-1766394456-22288 ...
	I1223 00:00:34.236442  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:34.236471  687772 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1223 00:00:34.236486  687772 cache.go:65] Caching tarball of preloaded images
	I1223 00:00:34.236540  687772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1223 00:00:34.236575  687772 preload.go:251] Found /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1223 00:00:34.236586  687772 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1223 00:00:34.236749  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.256540  687772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon, skipping pull
	I1223 00:00:34.256557  687772 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in daemon, skipping load
	I1223 00:00:34.256572  687772 cache.go:243] Successfully downloaded all kic artifacts
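
The two cache probes above are what let this restart skip every download: the preload tarball is found on disk and the kicbase image is found in the local daemon. Both checks can be reproduced by hand (a sketch; path and digest copied from the log):

    ls -lh /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
    docker image inspect \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 \
      --format 'cached: {{.Id}}'
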
	I1223 00:00:34.256623  687772 start.go:360] acquireMachinesLock for newest-cni-348344: {Name:mk26cd248e0bcd2d8f2e8a824868ba7de6c9c6f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1223 00:00:34.256695  687772 start.go:364] duration metric: took 39.918µs to acquireMachinesLock for "newest-cni-348344"
	I1223 00:00:34.256714  687772 start.go:96] Skipping create...Using existing machine configuration
	I1223 00:00:34.256719  687772 fix.go:54] fixHost starting: 
	I1223 00:00:34.256918  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.273955  687772 fix.go:112] recreateIfNeeded on newest-cni-348344: state=Stopped err=<nil>
	W1223 00:00:34.273978  687772 fix.go:138] unexpected machine state, will restart: <nil>
	W1223 00:00:31.998314  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:34.497435  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:36.498048  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:34.276035  687772 out.go:252] * Restarting existing docker container for "newest-cni-348344" ...
	I1223 00:00:34.276100  687772 cli_runner.go:164] Run: docker start newest-cni-348344
	I1223 00:00:34.520995  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:34.540249  687772 kic.go:430] container "newest-cni-348344" state is running.
	I1223 00:00:34.540736  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:34.560319  687772 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/config.json ...
	I1223 00:00:34.560718  687772 machine.go:94] provisionDockerMachine start ...
	I1223 00:00:34.560825  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:34.581907  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:34.582194  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:34.582211  687772 main.go:144] libmachine: About to run SSH command:
	hostname
	I1223 00:00:34.583095  687772 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41172->127.0.0.1:33168: read: connection reset by peer
	I1223 00:00:37.726578  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
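
The "connection reset by peer" above is the expected race right after `docker start`: the forwarded port (33168) accepts TCP before sshd inside the container is ready, so libmachine simply retries until the `hostname` probe succeeds about three seconds later. A hand-rolled version of the same wait (a sketch; port and key path come from this log, the timeout handling is arbitrary):

    # retry until sshd inside the freshly started container answers
    until ssh -p 33168 -o StrictHostKeyChecking=no -o ConnectTimeout=2 \
          -i /home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa \
          docker@127.0.0.1 hostname; do
      sleep 1
    done
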
	
	I1223 00:00:37.726621  687772 ubuntu.go:182] provisioning hostname "newest-cni-348344"
	I1223 00:00:37.726764  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.746947  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.747183  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.747203  687772 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-348344 && echo "newest-cni-348344" | sudo tee /etc/hostname
	I1223 00:00:37.900818  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-348344
	
	I1223 00:00:37.900900  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:37.919317  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:37.919561  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:37.919579  687772 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-348344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-348344/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-348344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1223 00:00:38.062239  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1223 00:00:38.062284  687772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22301-72233/.minikube CaCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22301-72233/.minikube}
	I1223 00:00:38.062331  687772 ubuntu.go:190] setting up certificates
	I1223 00:00:38.062344  687772 provision.go:84] configureAuth start
	I1223 00:00:38.062400  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:38.081263  687772 provision.go:143] copyHostCerts
	I1223 00:00:38.081355  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem, removing ...
	I1223 00:00:38.081386  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem
	I1223 00:00:38.081497  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/ca.pem (1082 bytes)
	I1223 00:00:38.081760  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem, removing ...
	I1223 00:00:38.081785  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem
	I1223 00:00:38.081851  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/cert.pem (1123 bytes)
	I1223 00:00:38.082007  687772 exec_runner.go:144] found /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem, removing ...
	I1223 00:00:38.082027  687772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem
	I1223 00:00:38.082101  687772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22301-72233/.minikube/key.pem (1679 bytes)
	I1223 00:00:38.082238  687772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-348344 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-348344]
	I1223 00:00:38.170695  687772 provision.go:177] copyRemoteCerts
	I1223 00:00:38.170759  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1223 00:00:38.170815  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.189123  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.291920  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1223 00:00:38.309166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1223 00:00:38.326166  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1223 00:00:38.344564  687772 provision.go:87] duration metric: took 282.199681ms to configureAuth
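
configureAuth copies the host CA material into place and mints a server certificate whose SANs (the `san=[...]` list above) must cover every name the machine is reached by, loopback and container IP included. The result can be checked with openssl (a sketch; `-ext` requires OpenSSL 1.1.1+):

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22301-72233/.minikube/machines/server.pem
    # expect: 127.0.0.1, 192.168.94.2, localhost, minikube, newest-cni-348344
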
	I1223 00:00:38.344627  687772 ubuntu.go:206] setting minikube options for container-runtime
	I1223 00:00:38.344911  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:38.344995  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.366260  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.366529  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.366545  687772 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1223 00:00:38.510728  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1223 00:00:38.510754  687772 ubuntu.go:71] root file system type: overlay
	I1223 00:00:38.510908  687772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1223 00:00:38.510979  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.532018  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.532329  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.532458  687772 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1223 00:00:38.686369  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1223 00:00:38.686443  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.704974  687772 main.go:144] libmachine: Using SSH client type: native
	I1223 00:00:38.705187  687772 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84da00] 0x8506a0 <nil>  [] 0s} 127.0.0.1 33168 <nil> <nil>}
	I1223 00:00:38.705204  687772 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1223 00:00:38.853249  687772 main.go:144] libmachine: SSH cmd err, output: <nil>: 
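
The one-liner above makes the unit install idempotent: `diff -u` exits 0 when the rendered file matches what is already installed, so the `|| { mv ...; restart; }` branch, and with it a disruptive Docker restart, only runs when the unit actually changed. The empty command output here means the files matched. The same pattern, generalized (a sketch; `render_unit` is a hypothetical stand-in for the printf above):

    render_unit > /tmp/docker.service.new
    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
      # content changed: install and restart; otherwise leave the running daemon alone
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl restart docker
    fi
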
	I1223 00:00:38.853285  687772 machine.go:97] duration metric: took 4.292539002s to provisionDockerMachine
	I1223 00:00:38.853303  687772 start.go:293] postStartSetup for "newest-cni-348344" (driver="docker")
	I1223 00:00:38.853321  687772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1223 00:00:38.853419  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1223 00:00:38.853487  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:38.872421  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:38.979494  687772 ssh_runner.go:195] Run: cat /etc/os-release
	I1223 00:00:38.983309  687772 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1223 00:00:38.983340  687772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1223 00:00:38.983352  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/addons for local assets ...
	I1223 00:00:38.983401  687772 filesync.go:126] Scanning /home/jenkins/minikube-integration/22301-72233/.minikube/files for local assets ...
	I1223 00:00:38.983478  687772 filesync.go:149] local asset: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem -> 758032.pem in /etc/ssl/certs
	I1223 00:00:38.983566  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1223 00:00:38.991328  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:39.009038  687772 start.go:296] duration metric: took 155.718049ms for postStartSetup
	I1223 00:00:39.009116  687772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1223 00:00:39.009156  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.028095  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	W1223 00:00:36.464751  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:38.465798  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:39.126878  687772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1223 00:00:39.131527  687772 fix.go:56] duration metric: took 4.874800463s for fixHost
	I1223 00:00:39.131555  687772 start.go:83] releasing machines lock for "newest-cni-348344", held for 4.874847678s
	I1223 00:00:39.131653  687772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-348344
	I1223 00:00:39.149572  687772 ssh_runner.go:195] Run: cat /version.json
	I1223 00:00:39.149644  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.149684  687772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1223 00:00:39.149791  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:39.169897  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.170262  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:39.329086  687772 ssh_runner.go:195] Run: systemctl --version
	I1223 00:00:39.336405  687772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1223 00:00:39.341291  687772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1223 00:00:39.341348  687772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1223 00:00:39.349279  687772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1223 00:00:39.349306  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.349351  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.349509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.363512  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1223 00:00:39.372815  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1223 00:00:39.381448  687772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.381508  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1223 00:00:39.390109  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.399162  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1223 00:00:39.408060  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1223 00:00:39.416584  687772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1223 00:00:39.424711  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1223 00:00:39.433432  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1223 00:00:39.442056  687772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1223 00:00:39.450905  687772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1223 00:00:39.458512  687772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1223 00:00:39.466991  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:39.550442  687772 ssh_runner.go:195] Run: sudo systemctl restart containerd
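
The sed pass above rewrites /etc/containerd/config.toml to match the cgroupfs driver detected on the host: SystemdCgroup is forced to false, the legacy runtime names are mapped to io.containerd.runc.v2, and the CNI conf_dir is pinned to /etc/cni/net.d. A kubelet/runtime cgroup-driver mismatch is a classic boot failure, so the result is worth verifying (a sketch, run inside the node):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    docker info --format '{{.CgroupDriver}}'              # expect: cgroupfs, matching the kubelet config rendered below
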
	I1223 00:00:39.632902  687772 start.go:496] detecting cgroup driver to use...
	I1223 00:00:39.632954  687772 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1223 00:00:39.633002  687772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1223 00:00:39.646974  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.659240  687772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1223 00:00:39.675979  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1223 00:00:39.688406  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1223 00:00:39.700912  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1223 00:00:39.715235  687772 ssh_runner.go:195] Run: which cri-dockerd
	I1223 00:00:39.718945  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1223 00:00:39.726972  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1223 00:00:39.739559  687772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1223 00:00:39.825824  687772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1223 00:00:39.909620  687772 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I1223 00:00:39.909743  687772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1223 00:00:39.923453  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1223 00:00:39.935401  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:40.031388  687772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1223 00:00:40.779801  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1223 00:00:40.794700  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1223 00:00:40.807727  687772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1223 00:00:40.821730  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:40.835038  687772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1223 00:00:40.917554  687772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1223 00:00:41.006720  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.090943  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1223 00:00:41.119047  687772 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1223 00:00:41.131277  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.215437  687772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1223 00:00:41.283622  687772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1223 00:00:41.298485  687772 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1223 00:00:41.298551  687772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1223 00:00:41.302554  687772 start.go:564] Will wait 60s for crictl version
	I1223 00:00:41.302645  687772 ssh_runner.go:195] Run: which crictl
	I1223 00:00:41.306307  687772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1223 00:00:41.332425  687772 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1223 00:00:41.332495  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.358777  687772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1223 00:00:41.385668  687772 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.3 ...
	I1223 00:00:41.385749  687772 cli_runner.go:164] Run: docker network inspect newest-cni-348344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1223 00:00:41.403696  687772 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1223 00:00:41.407961  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1223 00:00:41.419714  687772 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1223 00:00:38.997749  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:40.998211  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:41.420647  687772 kubeadm.go:884] updating cluster {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1223 00:00:41.420848  687772 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1223 00:00:41.420937  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.444049  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.444075  687772 docker.go:624] Images already preloaded, skipping extraction
	I1223 00:00:41.444128  687772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1223 00:00:41.466181  687772 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1223 00:00:41.466206  687772 cache_images.go:86] Images are preloaded, skipping loading
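
The image listing is taken twice: the first pass decides whether the preload tarball needs extracting ("already preloaded, skipping extraction"), the second confirms the final set before image loading is skipped outright. The same one-liner is useful for spotting version skew by hand (a sketch; expects the eight images listed above):

    out/minikube-linux-amd64 ssh -p newest-cni-348344 -- docker images --format '{{.Repository}}:{{.Tag}}'
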
	I1223 00:00:41.466215  687772 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-rc.1 docker true true} ...
	I1223 00:00:41.466312  687772 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-348344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1223 00:00:41.466372  687772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1223 00:00:41.520065  687772 cni.go:84] Creating CNI manager for ""
	I1223 00:00:41.520097  687772 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1223 00:00:41.520116  687772 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1223 00:00:41.520151  687772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-348344 NodeName:newest-cni-348344 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1223 00:00:41.520280  687772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-348344"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
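The multi-document config above is what gets shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). As a minimal sketch, not minikube's own code, one could parse each document and spot-check the fields that matter in this run; the local filename "kubeadm.yaml" is a hypothetical copy, and gopkg.in/yaml.v3 is an assumed dependency:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.v3's Decoder iterates the "---"-separated documents in the stream.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("invalid YAML document: %v", err)
		}
		switch doc["kind"] {
		case "KubeletConfiguration":
			fmt.Println("cgroupDriver:", doc["cgroupDriver"]) // expect "cgroupfs" per the dump
		case "KubeProxyConfiguration":
			fmt.Println("clusterCIDR:", doc["clusterCIDR"]) // expect "10.42.0.0/16"
		}
	}
}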
	
	I1223 00:00:41.520348  687772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1223 00:00:41.529909  687772 binaries.go:51] Found k8s binaries, skipping transfer
	I1223 00:00:41.529987  687772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1223 00:00:41.538670  687772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1223 00:00:41.552566  687772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1223 00:00:41.565766  687772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1223 00:00:41.578604  687772 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1223 00:00:41.582388  687772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
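	The bash one-liner above makes the control-plane hosts entry idempotent: drop any prior line ending in the name, append the current IP mapping, and copy the temp file back over /etc/hosts. A rough Go equivalent, as a sketch only (minikube does this over ssh_runner; the direct path and 0644 mode here are assumptions):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for the name, like `grep -v $'\t<name>$'`.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}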
	I1223 00:00:41.592239  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:41.689716  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:41.716265  687772 certs.go:69] Setting up /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344 for IP: 192.168.94.2
	I1223 00:00:41.716293  687772 certs.go:195] generating shared ca certs ...
	I1223 00:00:41.716315  687772 certs.go:227] acquiring lock for ca certs: {Name:mk952cc8302daab7c0050aedd5db4002f6808128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:41.716492  687772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key
	I1223 00:00:41.716548  687772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key
	I1223 00:00:41.716557  687772 certs.go:257] generating profile certs ...
	I1223 00:00:41.716731  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/client.key
	I1223 00:00:41.716814  687772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key.3654ac73
	I1223 00:00:41.716864  687772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key
	I1223 00:00:41.716992  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem (1338 bytes)
	W1223 00:00:41.717032  687772 certs.go:480] ignoring /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803_empty.pem, impossibly tiny 0 bytes
	I1223 00:00:41.717041  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca-key.pem (1675 bytes)
	I1223 00:00:41.717076  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/ca.pem (1082 bytes)
	I1223 00:00:41.717110  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/cert.pem (1123 bytes)
	I1223 00:00:41.717142  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/certs/key.pem (1679 bytes)
	I1223 00:00:41.717206  687772 certs.go:484] found cert: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem (1708 bytes)
	I1223 00:00:41.718210  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1223 00:00:41.739858  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1223 00:00:41.759304  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1223 00:00:41.777433  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1223 00:00:41.794878  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1223 00:00:41.811695  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1223 00:00:41.829786  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1223 00:00:41.846666  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/newest-cni-348344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1223 00:00:41.863546  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/ssl/certs/758032.pem --> /usr/share/ca-certificates/758032.pem (1708 bytes)
	I1223 00:00:41.880762  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1223 00:00:41.898275  687772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22301-72233/.minikube/certs/75803.pem --> /usr/share/ca-certificates/75803.pem (1338 bytes)
	I1223 00:00:41.915259  687772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1223 00:00:41.927541  687772 ssh_runner.go:195] Run: openssl version
	I1223 00:00:41.933577  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.940758  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1223 00:00:41.948346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952096  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.952140  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1223 00:00:41.987400  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1223 00:00:41.995158  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.002657  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/75803.pem /etc/ssl/certs/75803.pem
	I1223 00:00:42.010346  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014050  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 22:42 /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.014097  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75803.pem
	I1223 00:00:42.048067  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1223 00:00:42.055784  687772 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.063393  687772 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/758032.pem /etc/ssl/certs/758032.pem
	I1223 00:00:42.071081  687772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075446  687772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 22:42 /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.075515  687772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/758032.pem
	I1223 00:00:42.110438  687772 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
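	The three ln/hash/test sequences above are the standard OpenSSL trust-store dance: place the PEM under /usr/share/ca-certificates, compute its subject hash, and verify a /etc/ssl/certs/<hash>.0 symlink so TLS clients can resolve the CA. A sketch of the same idea, shelling out to openssl exactly as the log does (paths taken from the log; running this for real needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert links /etc/ssl/certs/<subject-hash>.0 to pemPath, the layout the
// `openssl x509 -hash` + `ln -fs` steps in the log produce.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -f behaviour: replace an existing link, ignore if absent
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}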
	I1223 00:00:42.118365  687772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1223 00:00:42.122225  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1223 00:00:42.157294  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1223 00:00:42.192810  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1223 00:00:42.226497  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1223 00:00:42.261017  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1223 00:00:42.299910  687772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
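	Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which is how the bootstrapper decides whether control-plane certs need regeneration. The same check in pure Go, as a sketch (the path is one of the certs probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}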
	I1223 00:00:42.335335  687772 kubeadm.go:401] StartCluster: {Name:newest-cni-348344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-348344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1223 00:00:42.335492  687772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1223 00:00:42.355576  687772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1223 00:00:42.364792  687772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1223 00:00:42.364813  687772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1223 00:00:42.364868  687772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1223 00:00:42.373141  687772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1223 00:00:42.374088  687772 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-348344" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.374674  687772 kubeconfig.go:62] /home/jenkins/minikube-integration/22301-72233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-348344" cluster setting kubeconfig missing "newest-cni-348344" context setting]
	I1223 00:00:42.375665  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
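	kubeconfig.go above first verifies that the profile's cluster and context entries exist before patching the file under a write lock. Checking for a named cluster with client-go looks roughly like this, as a sketch assuming k8s.io/client-go is available (the kubeconfig path is the one from the log):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and profile name taken from the log above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22301-72233/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	name := "newest-cni-348344"
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		fmt.Printf("kubeconfig needs updating: cluster=%v context=%v\n", hasCluster, hasContext)
	}
}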
	I1223 00:00:42.377667  687772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1223 00:00:42.385784  687772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1223 00:00:42.385817  687772 kubeadm.go:602] duration metric: took 20.99658ms to restartPrimaryControlPlane
	I1223 00:00:42.385829  687772 kubeadm.go:403] duration metric: took 50.508354ms to StartCluster
	I1223 00:00:42.385848  687772 settings.go:142] acquiring lock: {Name:mk05aa406dacdbba79fec0b7e7f355491ea46bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.385918  687772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1223 00:00:42.387577  687772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/kubeconfig: {Name:mkabb5ea92c3fe748f610038efb5c58128364c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1223 00:00:42.387909  687772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1223 00:00:42.388038  687772 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1223 00:00:42.388131  687772 config.go:182] Loaded profile config "newest-cni-348344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1223 00:00:42.388136  687772 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-348344"
	I1223 00:00:42.388154  687772 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-348344"
	I1223 00:00:42.388170  687772 addons.go:70] Setting dashboard=true in profile "newest-cni-348344"
	I1223 00:00:42.388189  687772 addons.go:70] Setting default-storageclass=true in profile "newest-cni-348344"
	I1223 00:00:42.388208  687772 addons.go:239] Setting addon dashboard=true in "newest-cni-348344"
	W1223 00:00:42.388222  687772 addons.go:248] addon dashboard should already be in state true
	I1223 00:00:42.388226  687772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-348344"
	I1223 00:00:42.388262  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388184  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.388611  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388764  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.388771  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.390679  687772 out.go:179] * Verifying Kubernetes components...
	I1223 00:00:42.391709  687772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1223 00:00:42.411960  687772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1223 00:00:42.411964  687772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1223 00:00:42.412088  687772 addons.go:239] Setting addon default-storageclass=true in "newest-cni-348344"
	I1223 00:00:42.412122  687772 host.go:66] Checking if "newest-cni-348344" exists ...
	I1223 00:00:42.412456  687772 cli_runner.go:164] Run: docker container inspect newest-cni-348344 --format={{.State.Status}}
	I1223 00:00:42.413023  687772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.413043  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1223 00:00:42.413094  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.416720  687772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1223 00:00:42.418014  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1223 00:00:42.418038  687772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1223 00:00:42.418101  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.435878  687772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.435898  687772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1223 00:00:42.435951  687772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-348344
	I1223 00:00:42.437317  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.440138  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
	I1223 00:00:42.468807  687772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/newest-cni-348344/id_rsa Username:docker}
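	Each ssh client above dials 127.0.0.1:33168, the host port docker publishes for the container's 22/tcp; the cli_runner lines show the Go template used to extract it. Standalone, that lookup is just the following sketch, shelling out to the docker CLI with the same inspect template:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// sshPort returns the host port mapped to the container's 22/tcp, using the
// inspect format string seen in the log above.
func sshPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("newest-cni-348344")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ssh via 127.0.0.1:" + port)
}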
	I1223 00:00:42.547260  687772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1223 00:00:42.600477  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1223 00:00:42.600501  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1223 00:00:42.601363  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:42.607651  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:00:42.614184  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1223 00:00:42.614204  687772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1223 00:00:42.628125  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1223 00:00:42.628154  687772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1223 00:00:42.643093  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1223 00:00:42.643121  687772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1223 00:00:42.702024  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1223 00:00:42.702052  687772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1223 00:00:42.716211  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1223 00:00:42.716238  687772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1223 00:00:42.729547  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1223 00:00:42.729569  687772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1223 00:00:42.742218  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1223 00:00:42.742246  687772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1223 00:00:42.754716  687772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:42.754741  687772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1223 00:00:42.767403  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.251127  687772 api_server.go:52] waiting for apiserver process to appear ...
	W1223 00:00:43.251190  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.251231  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.251243  687772 retry.go:84] will retry after 100ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
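	retry.go:84 re-runs the failed apply after 100ms and keeps retrying while the apiserver comes up; every failure in this stretch has the same root cause (kubectl cannot download the OpenAPI schema for validation because nothing is listening on localhost:8443 yet). The pattern, reduced to a sketch with doubling delays (the delays and attempt count are illustrative assumptions, not minikube's exact policy):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` with doubling delays until it
// succeeds or attempts are exhausted, like the retry loop in the log.
func applyWithRetry(manifest string, attempts int) error {
	delay := 100 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		fmt.Printf("apply failed (attempt %d), retrying in %s: %v\n", i+1, delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
}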
	I1223 00:00:43.251206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:43.251536  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.392258  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.445582  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.478824  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1223 00:00:43.509277  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:43.537617  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:43.565023  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.661224  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:43.715310  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:43.751478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.004029  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.062136  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
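	Interleaved with the apply retries, the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` probes are the other half of the loop: the addon manifests only stand a chance once the apiserver process exists and port 8443 accepts connections. A sketch of the port-side wait (host and port from the errors above; a plain TCP dial suffices because only liveness matters here):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls until addr accepts TCP connections or the deadline
// passes; the kubectl retries above can only succeed after this returns nil.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}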
	W1223 00:00:40.965708  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.465160  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:43.498161  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:45.997960  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:44.088452  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.143262  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.251335  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.383985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:44.396693  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:44.439768  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:00:44.455552  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:44.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:44.902917  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:44.962468  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.251805  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.326701  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:45.400433  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.424647  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:45.480956  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:45.751310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:45.799387  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:45.854148  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.251658  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.752208  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:46.836006  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:46.890186  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:46.995326  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:47.057423  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.251702  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:47.426474  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:47.485952  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:47.752131  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:48.207567  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1223 00:00:48.251315  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:48.262862  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.519836  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:48.578806  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:48.752112  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:45.465342  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.465739  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:47.998170  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:50.498122  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:49.251876  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.751844  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:49.890160  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:49.947020  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.066047  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:50.120129  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.251336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:50.558241  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:50.615271  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:50.751572  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.252351  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:51.751913  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.251786  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.752327  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:52.791985  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:52.850108  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:53.251446  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:53.751646  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:49.964544  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:51.964692  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:53.964842  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:52.997659  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:54.998340  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:54.252244  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:54.751444  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.347206  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:00:55.401615  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:55.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:55.936027  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:00:55.995088  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:56.251432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:56.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.111686  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:00:57.169648  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:00:57.251735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:57.751676  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.251279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:58.751377  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:00:55.965091  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	W1223 00:00:58.465307  679852 pod_ready.go:104] pod "coredns-66bc5c9577-v4sr7" is not "Ready", error: <nil>
	I1223 00:00:59.465067  679852 pod_ready.go:94] pod "coredns-66bc5c9577-v4sr7" is "Ready"
	I1223 00:00:59.465093  679852 pod_ready.go:86] duration metric: took 31.505726579s for pod "coredns-66bc5c9577-v4sr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.467499  679852 pod_ready.go:83] waiting for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.471040  679852 pod_ready.go:94] pod "etcd-kubenet-003676" is "Ready"
	I1223 00:00:59.471063  679852 pod_ready.go:86] duration metric: took 3.544638ms for pod "etcd-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.472907  679852 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.476385  679852 pod_ready.go:94] pod "kube-apiserver-kubenet-003676" is "Ready"
	I1223 00:00:59.476406  679852 pod_ready.go:86] duration metric: took 3.481083ms for pod "kube-apiserver-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.478385  679852 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.663149  679852 pod_ready.go:94] pod "kube-controller-manager-kubenet-003676" is "Ready"
	I1223 00:00:59.663178  679852 pod_ready.go:86] duration metric: took 184.769862ms for pod "kube-controller-manager-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:00:59.863586  679852 pod_ready.go:83] waiting for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.263634  679852 pod_ready.go:94] pod "kube-proxy-4ftjm" is "Ready"
	I1223 00:01:00.263661  679852 pod_ready.go:86] duration metric: took 400.030267ms for pod "kube-proxy-4ftjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.464316  679852 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863672  679852 pod_ready.go:94] pod "kube-scheduler-kubenet-003676" is "Ready"
	I1223 00:01:00.863704  679852 pod_ready.go:86] duration metric: took 399.359894ms for pod "kube-scheduler-kubenet-003676" in "kube-system" namespace to be "Ready" or be gone ...
	I1223 00:01:00.863716  679852 pod_ready.go:40] duration metric: took 32.907880274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1223 00:01:00.909769  679852 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1223 00:01:00.911549  679852 out.go:179] * Done! kubectl is now configured to use "kubenet-003676" cluster and "default" namespace by default
	W1223 00:00:57.497653  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:00:59.497943  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:00:59.251660  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:00:59.751533  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.252079  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:00.347519  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:00.405959  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.406017  687772 retry.go:84] will retry after 5.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.564176  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:00.620480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:00.751684  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.251318  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:01.751824  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.252159  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:02.751813  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.251679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:03.752247  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:01.997413  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:03.998103  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:06.498109  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:04.252308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:04.751451  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.252117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.751808  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:05.759328  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:05.824086  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:05.824133  687772 retry.go:84] will retry after 18.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.105622  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:06.162303  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:06.251478  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:06.751336  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:07.751827  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.251532  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:08.751779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:08.998157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:11.498214  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:09.251540  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:09.503324  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:09.562697  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.562742  687772 retry.go:84] will retry after 15.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:09.751986  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.251441  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:10.751356  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.251389  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:11.752279  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.251802  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:12.751324  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.251818  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:13.751866  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:13.998099  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:16.497424  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:14.252139  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:14.751583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.252310  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:15.751625  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.251842  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:16.751830  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.252084  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:17.751783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.251376  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:18.751964  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:18.498157  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:20.998023  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:19.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:19.751822  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.251704  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:20.568697  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:20.639449  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.639500  687772 retry.go:84] will retry after 12s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:20.751734  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.252249  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:21.751801  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.251412  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:22.751366  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.252197  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:23.751273  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:23.498226  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:25.998233  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:24.252133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:24.590346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:24.662633  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.662674  687772 retry.go:84] will retry after 26.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:24.751773  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:25.211650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1223 00:01:25.252110  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:25.284581  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:25.752154  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.251728  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:26.751953  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.251838  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:27.751740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.251778  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:28.752182  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:28.498149  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:30.998068  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:29.252321  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:29.752290  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.251523  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:30.752205  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.251957  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:31.751675  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.251501  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:32.610650  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:32.668272  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.668321  687772 retry.go:84] will retry after 25.8s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:32.751294  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.251566  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:33.751514  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:33.497906  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:35.997708  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:34.251717  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:34.751347  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.251743  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:35.751789  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.251397  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:36.751367  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.251392  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:37.751873  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.251848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:38.751798  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:01:38.498282  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:40.998196  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:39.251316  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:39.752288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.251339  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:40.751372  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:41.752308  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:42.252071  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
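
The half-second `sudo pgrep -xnf kube-apiserver.*minikube.*` probes that dominate this stretch are minikube waiting for a kube-apiserver process to appear on the node; pgrep exits non-zero while nothing matches, so the loop keeps polling. A minimal local sketch of that poll-until-deadline pattern (the name `waitForProcess` is illustrative; minikube runs the probe over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` at a fixed interval until a
// matching process exists or the deadline passes. pgrep exits non-zero
// when nothing matches, which exec.Run reports as an error.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	err := waitForProcess(`kube-apiserver.*minikube.*`, 500*time.Millisecond, 5*time.Second)
	fmt.Println("wait result:", err)
}
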
	I1223 00:01:42.752021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:42.776761  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.776791  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:42.776839  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:42.797079  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.797110  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:42.797168  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:42.817812  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.817839  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:42.817907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:42.837835  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.837868  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:42.837924  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:42.858165  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.858188  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:42.858231  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:42.878211  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.878236  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:42.878281  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:42.897739  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.897762  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:42.897817  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:42.918314  687772 logs.go:282] 0 containers: []
	W1223 00:01:42.918340  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
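
With no apiserver process found, minikube falls back to enumerating control-plane containers by the `k8s_<component>` name prefix, and every probe above returns `0 containers`. A rough equivalent of that probe, assuming local docker access (the function name `containerIDs` is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose names match
// k8s_<component> — the same `docker ps -a --filter=name=...
// --format={{.ID}}` probe the log runs per component. An empty slice
// corresponds to the `0 containers` / `No container was found` lines.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		fmt.Printf("%-22s %d containers %v err=%v\n", c, len(ids), ids, err)
	}
}
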
	I1223 00:01:42.918352  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:42.918362  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:42.965734  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:42.965766  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:42.986555  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:42.986585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:43.052069  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:43.042198    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.042815    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.044868    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.045810    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:43.047451    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:43.052092  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:43.052108  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:43.072511  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:43.072542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
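
The `Gathering logs for ...` lines then sweep the node's diagnostics (kubelet and Docker journals, dmesg, container status) to attach to the failure report. Roughly, under the assumption of local shell access (the log runs these with sudo over SSH, and they may need root privileges):

package main

import (
	"fmt"
	"os/exec"
)

// A sketch of the diagnostic sweep: run each pipeline the log shows and
// keep its combined output for the report.
func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", "journalctl -u kubelet -n 400"},
		{"dmesg", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"Docker", "journalctl -u docker -u cri-docker -n 400"},
		{"container status", "`which crictl || echo crictl` ps -a || docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("== %s: %d bytes, err=%v\n", c.name, len(out), err)
	}
}
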
	W1223 00:01:43.497482  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:45.497538  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:45.620354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:45.631856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:45.651448  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.651473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:45.651528  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:45.671204  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.671229  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:45.671279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:45.690397  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.690418  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:45.690470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:45.711316  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.711342  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:45.711400  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:45.731731  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.731755  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:45.731812  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:45.751415  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.751442  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:45.751500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:45.770135  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.770157  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:45.770202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:45.789088  687772 logs.go:282] 0 containers: []
	W1223 00:01:45.789114  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:45.789127  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:45.789139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:45.819714  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:45.819741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:45.867983  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:45.868013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:45.888018  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:45.888051  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:45.945247  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:45.937257    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.937924    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.939479    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.940165    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:45.941893    3392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:45.945270  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:45.945286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:47.390289  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:01:47.445750  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:47.445788  687772 retry.go:84] will retry after 18.6s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:48.464795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:48.476515  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:48.499002  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.499028  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:48.499082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:48.523130  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.523165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:48.523225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:48.547882  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.547904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:48.547950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:48.567534  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.567560  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:48.567627  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:48.590197  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.590222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:48.590280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:48.609279  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.609306  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:48.609357  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:48.628700  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.628731  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:48.628795  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:48.648378  687772 logs.go:282] 0 containers: []
	W1223 00:01:48.648402  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:48.648416  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:48.648433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:48.672700  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:48.672741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:48.743864  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:48.735937    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.736663    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.737624    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739145    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:48.739683    3554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:48.743885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:48.743897  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:48.762911  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:48.762941  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:48.793205  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:48.793235  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1223 00:01:47.498181  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:49.998144  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:51.128346  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:01:51.182480  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.182522  687772 retry.go:84] will retry after 29.5s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:51.341781  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:51.353222  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:51.373421  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.373447  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:51.373497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:51.392557  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.392613  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:51.392675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:51.411939  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.411964  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:51.412018  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:51.430968  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.430998  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:51.431054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:51.451234  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.451266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:51.451329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:51.470904  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.470947  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:51.471017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:51.490121  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.490146  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:51.490201  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:51.511843  687772 logs.go:282] 0 containers: []
	W1223 00:01:51.511875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:51.511891  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:51.511906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:51.545106  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:51.545138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:51.594541  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:51.594576  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:51.615455  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:51.615485  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:51.680061  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:51.672760    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.673328    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.674906    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.675511    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:51.677002    3749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:51.680080  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:51.680091  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1223 00:01:52.497841  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:54.498062  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:01:56.498119  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
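
Interleaved with the above, process 622784 is polling the no-preload-063943 node's Ready condition every ~2.5s against 192.168.103.2:8443 and getting connection refused each time. A sketch of just that polling shape, assuming an unauthenticated, certificate-skipping HTTP GET (minikube itself uses an authenticated client-go clientset and reads the node's conditions, so this is illustrative only):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollNode issues the same GET the node_ready.go lines show, retrying on
// a fixed interval until the endpoint answers or the deadline passes.
func pollNode(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // reachable; a real check would decode the node JSON and read its conditions
		}
		fmt.Printf("error getting node (will retry): %v\n", err)
		time.Sleep(interval)
	}
	return fmt.Errorf("node not reachable within %s", timeout)
}

func main() {
	fmt.Println(pollNode("https://192.168.103.2:8443/api/v1/nodes/no-preload-063943",
		2500*time.Millisecond, 10*time.Second))
}
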
	I1223 00:01:54.199432  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:54.211004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:54.230469  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.230498  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:54.230548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:54.251212  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.251245  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:54.251300  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:54.274147  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.274177  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:54.274238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:54.297381  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.297413  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:54.297471  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:54.316290  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.316315  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:54.316362  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:54.335315  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.335339  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:54.335393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:54.354058  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.354089  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:54.354144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:54.372661  687772 logs.go:282] 0 containers: []
	W1223 00:01:54.372686  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:54.372700  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:54.372715  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:54.417565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:54.417601  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:54.438013  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:54.438041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:54.497552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:54.488781    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.489417    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491051    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.491532    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:54.493218    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:54.497575  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:54.497589  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:54.517495  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:54.517523  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:57.056037  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:57.067478  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:57.087084  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.087114  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:57.087183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:57.105193  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.105218  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:57.105270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:57.122859  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.122885  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:57.122931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:57.141996  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.142021  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:57.142074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:01:57.160005  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.160032  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:01:57.160083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:01:57.178889  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.178915  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:01:57.178989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:01:57.196419  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.196446  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:01:57.196498  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:01:57.214764  687772 logs.go:282] 0 containers: []
	W1223 00:01:57.214790  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:01:57.214804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:01:57.214817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:01:57.266333  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:01:57.266370  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:01:57.289301  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:01:57.289330  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:01:57.347060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:01:57.339792    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.340363    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.341983    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.342452    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:01:57.344014    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:01:57.347099  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:01:57.347117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:01:57.370222  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:01:57.370257  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:01:58.466074  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:01:58.519779  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1223 00:01:58.519828  687772 retry.go:84] will retry after 42.4s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
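Every dashboard manifest above fails for the same reason: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443, so each apply is refused before any YAML is even parsed. Note that the `--validate=false` flag kubectl suggests would only silence the schema download; the apply itself would still fail here, because the server is unreachable. A minimal manual check, assuming SSH access to the node (the `<profile>` name is a placeholder, not taken from this run):

	# confirm the endpoint kubectl is using is actually down
	minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"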
	W1223 00:01:58.498177  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:00.998033  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:01:59.898063  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:01:59.909410  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:01:59.927950  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.927974  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:01:59.928017  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:01:59.946773  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.946800  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:01:59.946861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:01:59.964419  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.964443  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:01:59.964500  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:01:59.982454  687772 logs.go:282] 0 containers: []
	W1223 00:01:59.982478  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:01:59.982537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:00.000838  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.000860  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:00.000929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:00.018673  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.018696  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:00.018747  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:00.035944  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.035973  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:00.036027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:00.053640  687772 logs.go:282] 0 containers: []
	W1223 00:02:00.053666  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:00.053679  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:00.053700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:00.098844  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:00.098870  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:00.120198  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:00.120224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:00.175459  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:00.168568    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.169147    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.170692    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.171078    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:00.172563    4237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:00.175477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:00.175490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:00.194106  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:00.194146  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
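The block above is one complete diagnostic sweep: minikube probes for each expected control-plane container by its kubelet naming convention (k8s_<component>), then gathers kubelet, dmesg, describe-nodes, Docker, and container-status logs. The same sweep repeats below on every poll until the start deadline expires. Each probe is an ordinary docker query and can be reproduced verbatim from the log:

	# list any kubelet-created apiserver containers; empty output matches
	# the "0 containers" lines above
	docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'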
	I1223 00:02:02.722361  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:02.733721  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:02.753939  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.753963  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:02.754025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:02.773570  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.773610  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:02.773665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:02.793427  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.793451  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:02.793514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:02.812154  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.812183  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:02.812241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:02.830757  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.830777  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:02.830819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:02.849140  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.849163  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:02.849206  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:02.867505  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.867529  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:02.867584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:02.885892  687772 logs.go:282] 0 containers: []
	W1223 00:02:02.885920  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:02.885935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:02.885950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:02.933880  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:02.933906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:02.955273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:02.955304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:03.009924  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:03.002806    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.003364    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.004891    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.005360    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:03.006852    4408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:03.009951  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:03.010012  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:03.028798  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:03.028821  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:03.497953  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:05.997506  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:05.557718  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:05.569769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:05.588873  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.588899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:05.588946  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:05.607265  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.607289  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:05.607342  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:05.625761  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.625790  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:05.625860  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:05.643443  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.643464  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:05.643513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:05.661241  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.661266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:05.661314  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:05.679744  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.679764  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:05.679805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:05.697808  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.697831  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:05.697878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:05.716222  687772 logs.go:282] 0 containers: []
	W1223 00:02:05.716245  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:05.716255  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:05.716269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:05.772404  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:05.772437  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:05.793610  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:05.793643  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:05.849453  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:05.842499    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.842938    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844491    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.844916    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:05.846469    4575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:05.849479  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:05.849493  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:05.868250  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:05.868273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:05.997019  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1223 00:02:06.048845  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:06.048961  687772 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
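Addon enablement goes through the same retry machinery: the apply callback fails, out.go surfaces the warning, and retry.go re-runs the command with backoff (42.4s for the dashboard manifests earlier). A sketch of the implied loop, using the same paths as the log (inferred behavior, not minikube source):

	# re-run the apply until the apiserver answers (assumed retry shape)
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/storageclass.yaml; do
	  sleep 42   # backoff comparable to the retry.go interval logged above
	done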
	I1223 00:02:08.398283  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:08.409794  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:08.428854  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.428878  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:08.428927  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:08.447223  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.447248  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:08.447292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:08.464797  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.464816  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:08.464857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:08.483364  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.483386  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:08.483449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:08.501985  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.502014  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:08.502064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:08.520995  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.521020  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:08.521071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:08.542089  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.542109  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:08.542149  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:08.560448  687772 logs.go:282] 0 containers: []
	W1223 00:02:08.560468  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:08.560478  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:08.560489  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:08.606044  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:08.606073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:08.626314  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:08.626339  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:08.681455  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:08.674268    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.674839    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676419    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.676835    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:08.678358    4743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:08.681477  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:08.681490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:08.699804  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:08.699830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1223 00:02:07.998312  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:10.498076  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:11.230050  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:11.241320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:11.260234  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.260256  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:11.260301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:11.279535  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.279558  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:11.279635  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:11.297775  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.297799  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:11.297843  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:11.315756  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.315780  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:11.315823  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:11.333828  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.333850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:11.333894  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:11.351608  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.351631  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:11.351673  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:11.371003  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.371024  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:11.371065  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:11.388642  687772 logs.go:282] 0 containers: []
	W1223 00:02:11.388662  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:11.388671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:11.388681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:11.406664  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:11.406687  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:11.434828  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:11.434851  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:11.481940  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:11.481966  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:11.503051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:11.503077  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:11.563912  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:11.555767    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.556288    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.558989    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.559407    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:11.560934    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:14.064094  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1223 00:02:12.997638  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	W1223 00:02:15.497547  622784 node_ready.go:55] error getting node "no-preload-063943" condition "Ready" status (will retry): Get "https://192.168.103.2:8443/api/v1/nodes/no-preload-063943": dial tcp 192.168.103.2:8443: connect: connection refused
	I1223 00:02:15.997757  622784 node_ready.go:38] duration metric: took 6m0.000870759s for node "no-preload-063943" to be "Ready" ...
	I1223 00:02:15.999489  622784 out.go:203] 
	W1223 00:02:16.002745  622784 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1223 00:02:16.002767  622784 out.go:285] * 
	W1223 00:02:16.002971  622784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1223 00:02:16.004060  622784 out.go:203] 
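This is the terminal failure for the no-preload profile: node_ready.go polled the node object roughly every 2.5s, never got an answer within the 6m0s budget, and minikube exited with GUEST_START. The poll can be reproduced against the same endpoint, and the failure box above names the triage command (the -p flag is inferred from the node name, not shown in the box):

	# the readiness GET that was refused throughout the run
	curl -sk https://192.168.103.2:8443/api/v1/nodes/no-preload-063943
	# collect full logs for a bug report, as the failure box suggests
	minikube logs --file=logs.txt -p no-preload-063943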
	I1223 00:02:14.075388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:14.094051  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.094075  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:14.094123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:14.112428  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.112454  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:14.112511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:14.130910  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.130935  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:14.130991  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:14.149172  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.149194  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:14.149247  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:14.167387  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.167414  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:14.167470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:14.187009  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.187034  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:14.187080  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:14.205514  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.205537  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:14.205604  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:14.223867  687772 logs.go:282] 0 containers: []
	W1223 00:02:14.223893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:14.223906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:14.223919  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:14.278850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:14.272061    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.272519    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274102    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.274491    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:14.275975    5073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:14.278877  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:14.278904  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:14.297791  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:14.297817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:14.329010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:14.329035  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:14.375196  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:14.375228  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:16.895760  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:16.908501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:16.928330  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.928357  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:16.928403  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:16.947248  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.947272  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:16.947319  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:16.967240  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.967266  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:16.967318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:16.986942  687772 logs.go:282] 0 containers: []
	W1223 00:02:16.986966  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:16.987025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:17.008674  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.008702  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:17.008760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:17.030466  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.030492  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:17.030548  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:17.051687  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.051719  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:17.051773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:17.073457  687772 logs.go:282] 0 containers: []
	W1223 00:02:17.073486  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:17.073502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:17.073521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:17.131973  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:17.132010  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:17.157397  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:17.157433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:17.217639  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:17.209809    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.210419    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212231    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.212725    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:17.214466    5249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:17.217669  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:17.217683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:17.239498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:17.239530  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:19.769550  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:19.782360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:19.802423  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.802446  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:19.802497  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:19.821183  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.821214  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:19.821269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:19.840343  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.840369  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:19.840426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:19.857810  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.857835  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:19.857878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:19.875458  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.875481  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:19.875523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:19.893840  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.893864  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:19.893916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:19.912030  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.912053  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:19.912094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:19.930049  687772 logs.go:282] 0 containers: []
	W1223 00:02:19.930066  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:19.930077  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:19.930088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:19.976279  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:19.976304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:19.995814  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:19.995837  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:20.054797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:20.046679    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.047269    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.048969    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.049570    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:20.051329    5408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:20.054819  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:20.054833  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:20.074562  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:20.074588  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:20.651032  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1223 00:02:20.702678  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:20.702795  687772 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
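The "apply failed, will retry" warning shows how addon enabling works: minikube shells into the node and runs the version-matched kubectl it cached under /var/lib/minikube/binaries/v1.35.0-rc.1/, and client-side validation has to download the server's OpenAPI document first, so with the apiserver down the apply fails before any object is created. Once the apiserver answers, the same apply can be rerun by hand; a sketch using the exact paths quoted in the log ($PROFILE is again a placeholder):

	minikube -p "$PROFILE" ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml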
	I1223 00:02:22.602868  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:22.614420  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:22.633871  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.633892  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:22.633942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:22.652376  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.652403  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:22.652454  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:22.670318  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.670340  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:22.670384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:22.688893  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.688913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:22.688966  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:22.707579  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.707614  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:22.707667  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:22.726147  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.726174  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:22.726230  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:22.744895  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.744919  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:22.744975  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:22.765807  687772 logs.go:282] 0 containers: []
	W1223 00:02:22.765834  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:22.765848  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:22.765858  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:22.786075  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:22.786111  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:22.814010  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:22.814034  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:22.859717  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:22.859741  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:22.878865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:22.878889  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:22.933790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:22.926530    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.927059    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.928672    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.929122    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:22.930663    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
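Note the binary path in the failing "describe nodes" command: diagnostics deliberately run the kubectl that minikube cached on the node for the cluster's own Kubernetes version, not whatever kubectl the host happens to have, which avoids client/server version skew while the cluster is half up. A quick way to confirm which client the node is using (sketch, run from a shell on the node):

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig version --client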
	I1223 00:02:25.434500  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:25.446396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:25.466157  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.466184  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:25.466237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:25.484799  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.484827  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:25.484899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:25.503442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.503470  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:25.503516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:25.522088  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.522114  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:25.522174  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:25.540899  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.540924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:25.540979  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:25.559853  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.559877  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:25.559929  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:25.578537  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.578560  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:25.578619  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:25.597442  687772 logs.go:282] 0 containers: []
	W1223 00:02:25.597465  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
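Each probe pass above checks the same eight component names; with cri-dockerd, kubelet-managed containers carry the k8s_ name prefix, so a name filter alone tells whether a component container was ever created. A compact equivalent of the loop, assuming a shell on the node:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	    # empty output means the container was never created, matching the log's "0 containers"
	    ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	    echo "${c}: ${ids:-<none>}"
	done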
	I1223 00:02:25.597476  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:25.597491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:25.617688  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:25.617718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:25.672737  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:25.665784    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.666239    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.667786    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.668269    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.669751    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:25.665784    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.666239    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.667786    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.668269    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:25.669751    5752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:25.672761  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:25.672777  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:25.691559  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:25.691585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:25.719893  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:25.719918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
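The "Gathering logs for ..." steps always draw on the same four sources, in varying order: the container-runtime journal, container status, the kubelet journal, and kernel messages. Copied from the log, they can be collected manually on the node as:

	sudo journalctl -u docker -u cri-docker -n 400      # runtime + CRI shim, last 400 lines
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status
	sudo journalctl -u kubelet -n 400                   # kubelet, last 400 lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings

The backtick fallback in the container-status line is deliberate: if crictl is not on PATH, "echo crictl" still yields a command name, that sudo invocation then fails, and the trailing || falls through to plain docker ps -a.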
	I1223 00:02:28.271777  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:28.284248  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:28.304042  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.304069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:28.304126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:28.322682  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.322711  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:28.322769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:28.340899  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.340925  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:28.340974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:28.359896  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.359922  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:28.359976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:28.378627  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.378650  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:28.378700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:28.396793  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.396821  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:28.396870  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:28.415408  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.415434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:28.415480  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:28.434108  687772 logs.go:282] 0 containers: []
	W1223 00:02:28.434131  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:28.434142  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:28.434153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:28.462377  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:28.462405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:28.509046  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:28.509080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:28.531034  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:28.531065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:28.587866  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:28.580703    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.581207    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.582777    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.583174    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.584683    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:28.580703    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.581207    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.582777    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.583174    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:28.584683    5937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
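The timestamps show the whole probe-and-gather cycle repeating roughly every two to three seconds: minikube is polling until an apiserver process appears or its wait budget runs out. A bare-bones sketch of such a wait loop, run on the node (the 120 s budget is a made-up value, not from this log):

	deadline=$(( $(date +%s) + 120 ))   # hypothetical budget, adjust as needed
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo 'timed out waiting for kube-apiserver' >&2
	        exit 1
	    fi
	    sleep 2.5
	done
	echo 'kube-apiserver is running'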
	I1223 00:02:28.587904  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:28.587920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.109730  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:31.121215  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:31.140775  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.140799  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:31.140853  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:31.160694  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.160719  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:31.160766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:31.180064  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.180087  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:31.180133  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:31.198777  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.198802  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:31.198856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:31.217848  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.217875  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:31.217923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:31.237167  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.237196  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:31.237251  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:31.257964  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.257995  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:31.258056  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:31.279556  687772 logs.go:282] 0 containers: []
	W1223 00:02:31.279581  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:31.279607  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:31.279624  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:31.336644  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:31.329447    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.329953    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.331523    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.332012    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:31.333520    6086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:31.336664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:31.336675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:31.355102  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:31.355129  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:31.384063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:31.384096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:31.429299  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:31.429337  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:33.951226  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:33.962558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:33.981280  687772 logs.go:282] 0 containers: []
	W1223 00:02:33.981301  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:33.981353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:34.000326  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.000351  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:34.000417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:34.020043  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.020069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:34.020114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:34.042279  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.042304  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:34.042363  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:34.060550  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.060571  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:34.060631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:34.078917  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.078939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:34.078986  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:34.098151  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.098177  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:34.098224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:34.117100  687772 logs.go:282] 0 containers: []
	W1223 00:02:34.117124  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:34.117137  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:34.117153  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:34.138330  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:34.138358  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:34.193562  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:34.186453    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.187024    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188567    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.188984    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:34.190534    6246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:34.193588  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:34.193615  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:34.212264  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:34.212288  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:34.240368  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:34.240399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:36.793206  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:36.804783  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:36.823535  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.823556  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:36.823618  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:36.841856  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.841879  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:36.841933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:36.860292  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.860319  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:36.860360  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:36.878691  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.878719  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:36.878773  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:36.897448  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.897472  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:36.897519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:36.916562  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.916585  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:36.916654  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:36.934784  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.934807  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:36.934865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:36.953285  687772 logs.go:282] 0 containers: []
	W1223 00:02:36.953305  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:36.953317  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:36.953328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:37.000978  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:37.001008  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:37.021185  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:37.021217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:37.081314  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:37.074072    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.074651    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076251    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076693    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.078295    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:37.074072    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.074651    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076251    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.076693    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:37.078295    6419 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:37.081345  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:37.081366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:37.100453  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:37.100480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:39.629693  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:39.641060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:39.660163  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.660187  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:39.660232  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:39.680357  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.680379  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:39.680422  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:39.699821  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.699853  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:39.699916  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:39.719383  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.719407  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:39.719460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:39.739699  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.739726  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:39.739800  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:39.758766  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.758791  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:39.758849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:39.777656  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.777690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:39.777752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:39.796962  687772 logs.go:282] 0 containers: []
	W1223 00:02:39.796984  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:39.796995  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:39.797006  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:39.842320  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:39.842347  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:39.862054  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:39.862080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:39.916930  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:39.910012    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.910586    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912148    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912544    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.913971    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:39.910012    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.910586    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912148    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.912544    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:39.913971    6584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:39.916953  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:39.916970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:39.935277  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:39.935306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:40.946301  687772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1223 00:02:41.000005  687772 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1223 00:02:41.000109  687772 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
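The dashboard addon is a single kubectl apply spanning ten -f manifests, so one unreachable OpenAPI endpoint fails every file at validation time, before anything reaches the cluster. The suggested --validate=false would only skip the schema download; here the connection itself is refused, so the apply would still fail. The request kubectl is making can be probed directly; a sketch from any shell that can reach the endpoint (the URL is verbatim from the errors above):

	# 000 plus curl exit code 7 means connection refused; 200 means the schema is served again
	curl -sk -o /dev/null -w '%{http_code}\n' 'https://localhost:8443/openapi/v2?timeout=32s'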
	I1223 00:02:41.001884  687772 out.go:179] * Enabled addons: 
	I1223 00:02:41.002846  687772 addons.go:530] duration metric: took 1m58.614813363s for enable addons: enabled=[]
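After nearly two minutes of retries the addon phase gives up and reports an empty set: "enabled=[]" means every requested addon callback failed, not that none were requested. The resulting addon state can be inspected per profile; a sketch ($PROFILE again a placeholder):

	minikube -p "$PROFILE" addons list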
	I1223 00:02:42.463498  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:42.474861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:42.493733  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.493756  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:42.493806  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:42.513344  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.513376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:42.513436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:42.537617  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.537647  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:42.537701  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:42.557673  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.557698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:42.557746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:42.576567  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.576604  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:42.576669  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:42.595813  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.595836  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:42.595890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:42.615074  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.615101  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:42.615154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:42.634655  687772 logs.go:282] 0 containers: []
	W1223 00:02:42.634685  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:42.634702  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:42.634719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:42.654826  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:42.654852  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:42.710552  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:42.703396    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.703921    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705447    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705896    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.707365    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:02:42.703396    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.703921    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705447    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.705896    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:42.707365    6762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:02:42.710573  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:42.710585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:42.729412  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:42.729439  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:42.758163  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:42.758187  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
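	(At this point one full diagnostic pass is complete: minikube probes for each control-plane container by name and finds none. The pattern is the same "docker ps -a --filter name=k8s_<component> --format {{.ID}}" call per component, with empty output meaning no such container exists. Below is a minimal Go sketch of that probe; it is illustrative only, the helper containerIDs is not minikube's code, and it assumes docker is on PATH.)

	// Sketch of the per-component container probe seen in the log lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		// One ID per line; empty output means "0 containers".
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println("docker ps failed:", err)
				continue
			}
			if len(ids) == 0 {
				// Matches the W-level "No container was found matching" lines.
				fmt.Printf("no container found matching %q\n", c)
			}
		}
	}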
	I1223 00:02:45.306682  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:45.318226  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:45.337265  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.337287  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:45.337343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:45.355924  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.355945  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:45.355990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:45.374282  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.374303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:45.374348  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:45.394500  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.394533  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:45.394584  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:45.412466  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.412489  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:45.412538  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:45.431148  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.431185  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:45.431234  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:45.450281  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.450303  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:45.450352  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:45.468758  687772 logs.go:282] 0 containers: []
	W1223 00:02:45.468787  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:45.468804  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:45.468818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:45.520708  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:45.520742  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:45.542983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:45.543013  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:45.598778  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:45.591672    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.592201    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.593887    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.594424    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:45.595931    6934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:45.598798  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:45.598812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:45.617903  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:45.617931  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.156370  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:48.167842  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:48.187202  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.187224  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:48.187268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:48.206448  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.206471  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:48.206516  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:48.225302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.225322  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:48.225373  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:48.244155  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.244185  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:48.244245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:48.264312  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.264350  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:48.264418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:48.284233  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.284260  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:48.284317  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:48.303899  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.303924  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:48.303973  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:48.324302  687772 logs.go:282] 0 containers: []
	W1223 00:02:48.324335  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:48.324350  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:48.324366  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:48.345435  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:48.345463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:48.402949  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:48.395302    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.395834    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.397506    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.398024    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:48.399634    7094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:48.402972  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:48.402984  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:48.423927  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:48.423954  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:48.452771  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:48.452799  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.001239  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:51.013175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:51.032822  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.032846  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:51.032898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:51.051652  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.051682  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:51.051724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:51.070373  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.070395  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:51.070448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:51.088655  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.088676  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:51.088732  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:51.108004  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.108025  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:51.108078  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:51.126636  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.126662  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:51.126728  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:51.145355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.145385  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:51.145451  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:51.164355  687772 logs.go:282] 0 containers: []
	W1223 00:02:51.164384  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:51.164396  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:51.164409  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:51.191698  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:51.191724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:51.238383  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:51.238411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:51.260545  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:51.260580  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:51.318147  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:51.310651    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.311192    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.312772    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.313264    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:51.314877    7281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:51.318168  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:51.318182  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
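	(The recurring kubectl failure above is a plain TCP-level symptom: "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on the apiserver port at all, as opposed to a TLS or authorization error from a running apiserver. A direct dial reproduces the distinction; this is a hypothetical one-off check, with port 8443 taken from the log, not minikube code.)

	// Sketch: distinguish "nothing listening" from "something listening" on 8443.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Expect e.g. "connect: connection refused" while the apiserver is down.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}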
	I1223 00:02:53.838848  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:53.850007  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:53.868584  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.868622  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:53.868663  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:53.887617  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.887640  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:53.887687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:53.906384  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.906409  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:53.906453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:53.924912  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.924938  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:53.924988  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:53.943400  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.943425  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:53.943477  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:53.961941  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.961969  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:53.962024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:53.980915  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.980941  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:53.980987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:53.998798  687772 logs.go:282] 0 containers: []
	W1223 00:02:53.998817  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:53.998827  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:53.998839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:54.017064  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:54.017089  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:54.045091  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:54.045114  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:54.090278  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:54.090307  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:54.111890  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:54.111920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:54.166797  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:54.159777    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.160366    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.161942    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.162343    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:54.163852    7452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.668571  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:56.680147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:56.699018  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.699042  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:56.699093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:56.716996  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.717019  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:56.717068  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:56.735529  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.735565  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:56.735644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:56.756677  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.756701  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:56.756757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:56.777819  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.777850  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:56.777905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:56.799967  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.799997  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:56.800054  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:56.818811  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.818836  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:56.818881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:56.837426  687772 logs.go:282] 0 containers: []
	W1223 00:02:56.837461  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:56.837473  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:56.837487  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:56.893850  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:56.886682    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.887224    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.888853    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.889345    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:56.890925    7600 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:56.893879  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:56.893894  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:56.912125  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:56.912151  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:02:56.939250  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:56.939279  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:56.986566  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:56.986599  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.506330  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:02:59.518294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:02:59.540502  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.540529  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:02:59.540586  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:02:59.559288  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.559322  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:02:59.559372  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:02:59.577919  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.577945  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:02:59.578002  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:02:59.596632  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.596655  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:02:59.596705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:02:59.614750  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.614775  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:02:59.614826  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:02:59.632989  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.633007  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:02:59.633057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:02:59.650953  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.650972  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:02:59.651020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:02:59.669171  687772 logs.go:282] 0 containers: []
	W1223 00:02:59.669190  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:02:59.669202  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:02:59.669214  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:02:59.713997  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:02:59.714026  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:02:59.733682  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:02:59.733709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:02:59.801000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:02:59.792717    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.793343    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796120    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.796686    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:02:59.797955    7761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:02:59.801018  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:02:59.801029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:02:59.819988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:02:59.820018  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
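	(The timestamps show the whole cycle, pgrep for kube-apiserver followed by the container probes and log gathering, repeating roughly every two to three seconds. Below is a minimal sketch of that retry cadence; the interval and deadline values are illustrative rather than minikube's configured ones, and waitForAPIServer is a hypothetical helper. In the real log the pgrep runs over SSH inside the node.)

	// Sketch: poll for the apiserver process on a fixed interval until a deadline.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func apiserverRunning() bool {
		// pgrep exits non-zero when no process matches, same flags as the log.
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func waitForAPIServer(interval, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			if apiserverRunning() {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for kube-apiserver")
	}

	func main() {
		if err := waitForAPIServer(2500*time.Millisecond, 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}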
	I1223 00:03:02.350019  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:02.361484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:02.380765  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.380793  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:02.380841  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:02.398822  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.398847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:02.398892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:02.416468  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.416488  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:02.416530  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:02.435155  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.435182  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:02.435237  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:02.453935  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.453961  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:02.454012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:02.472347  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.472376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:02.472445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:02.490480  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.490505  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:02.490562  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:02.510458  687772 logs.go:282] 0 containers: []
	W1223 00:03:02.510485  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:02.510498  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:02.510509  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:02.541744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:02.541769  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:02.587578  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:02.587619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:02.607135  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:02.607161  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:02.663082  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:02.655098    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.655704    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.657931    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.658437    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:02.660015    7942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:02.663104  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:02.663117  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.182740  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:05.194033  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:05.212783  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.212809  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:05.212868  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:05.230615  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.230643  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:05.230687  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:05.249068  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.249091  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:05.249140  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:05.268884  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.268913  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:05.268965  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:05.288077  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.288103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:05.288159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:05.306886  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.306916  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:05.306970  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:05.325552  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.325579  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:05.325644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:05.344222  687772 logs.go:282] 0 containers: []
	W1223 00:03:05.344252  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:05.344264  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:05.344276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:05.389222  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:05.389252  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:05.409357  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:05.409384  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:05.466244  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:05.459201    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.459757    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461346    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.461781    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:05.463277    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:05.466269  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:05.466285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:05.484803  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:05.484830  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
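	(The "container status" command above uses a shell fallback: `which crictl || echo crictl` substitutes the bare word "crictl" when the binary is missing, so the first "ps -a" fails cleanly and "||" falls through to "sudo docker ps -a". The same idea expressed in Go is sketched below; sudo is omitted for brevity and containerStatus is a hypothetical helper, not minikube's code.)

	// Sketch: prefer crictl when installed, fall back to docker otherwise.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerStatus() ([]byte, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
				return out, nil
			}
		}
		// Fallback, mirroring the `|| sudo docker ps -a` branch in the log.
		return exec.Command("docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("no container runtime listing available:", err)
			return
		}
		fmt.Print(string(out))
	}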
	I1223 00:03:08.013719  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:08.026534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:08.046545  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.046567  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:08.046633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:08.065353  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.065375  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:08.065423  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:08.084081  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.084109  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:08.084156  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:08.102488  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.102514  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:08.102570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:08.121317  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.121347  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:08.121391  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:08.139209  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.139232  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:08.139282  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:08.157445  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.157465  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:08.157510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:08.177073  687772 logs.go:282] 0 containers: []
	W1223 00:03:08.177101  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:08.177115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:08.177131  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:08.195188  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:08.195222  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:08.223256  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:08.223282  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:08.270668  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:08.270696  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:08.290331  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:08.290355  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:08.344801  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:08.337927    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.338445    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340039    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.340476    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:08.341971    8285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:10.846497  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:10.857798  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:10.876797  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.876818  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:10.876863  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:10.895838  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.895862  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:10.895907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:10.913971  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.913996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:10.914038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:10.932422  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.932449  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:10.932501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:10.951013  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.951034  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:10.951076  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:10.969170  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.969198  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:10.969242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:10.988274  687772 logs.go:282] 0 containers: []
	W1223 00:03:10.988332  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:10.988382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:11.006849  687772 logs.go:282] 0 containers: []
	W1223 00:03:11.006875  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:11.006889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:11.006906  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:11.059569  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:11.059619  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:11.079808  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:11.079835  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:11.134768  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:11.127709    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.128312    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.129883    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.130324    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:11.131860    8435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:11.134794  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:11.134817  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:11.153181  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:11.153207  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
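Each gathering cycle above begins with the same eight docker ps name filters and warns when a control-plane container is absent (logs.go:282/284). Below is a stand-alone sketch of that presence check, assuming direct access to the docker CLI rather than minikube's ssh_runner; the component list is copied from the log, and everything else is illustrative rather than minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// Same query the log shows:
		//   docker ps -a --filter=name=k8s_<name> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name,
			"--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}

On this node every filter returns zero IDs, which is why all eight warnings fire on every cycle.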
	I1223 00:03:13.681510  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:13.692957  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:13.711987  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.712017  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:13.712069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:13.730999  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.731026  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:13.731083  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:13.753677  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.753709  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:13.753769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:13.779299  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.779328  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:13.779389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:13.800195  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.800223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:13.800269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:13.818836  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.818861  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:13.818905  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:13.837265  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.837293  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:13.837349  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:13.855911  687772 logs.go:282] 0 containers: []
	W1223 00:03:13.855934  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:13.855944  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:13.855963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:13.877413  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:13.877442  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:13.932902  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:13.925709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.926137    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.927723    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.928122    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:13.929665    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:13.932922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:13.932935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:13.951430  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:13.951455  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:13.979434  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:13.979463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
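The pgrep probes above land roughly every 2.5-3 seconds, consistent with a fixed-interval wait loop that polls for a running apiserver process and regathers diagnostics on each miss. A minimal sketch of such a loop follows; the interval and the overall deadline are illustrative guesses, not minikube's actual settings.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// The same liveness probe the log repeats between cycles:
		//   sudo pgrep -xnf kube-apiserver.*minikube.*
		if exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(2500 * time.Millisecond) // matches the observed cadence
	}
	fmt.Println("timed out waiting for kube-apiserver")
}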
	I1223 00:03:16.528395  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:16.539658  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:16.558721  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.558746  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:16.558802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:16.577097  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.577122  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:16.577169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:16.594944  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.594973  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:16.595021  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:16.612956  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.612982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:16.613028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:16.631601  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.631626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:16.631689  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:16.650054  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.650077  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:16.650125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:16.668847  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.668868  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:16.668912  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:16.686862  687772 logs.go:282] 0 containers: []
	W1223 00:03:16.686892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:16.686906  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:16.686923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:16.743145  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:16.736059    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.736637    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738192    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.738624    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:16.740098    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:16.743166  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:16.743178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:16.762565  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:16.762607  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:16.794528  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:16.794556  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:16.840343  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:16.840372  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.362509  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:19.374211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:19.393192  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.393216  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:19.393268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:19.412437  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.412465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:19.412523  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:19.432373  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.432401  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:19.432460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:19.452125  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.452159  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:19.452217  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:19.471301  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.471328  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:19.471374  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:19.490544  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.490571  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:19.490643  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:19.510487  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.510508  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:19.510559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:19.529060  687772 logs.go:282] 0 containers: []
	W1223 00:03:19.529084  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:19.529097  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:19.529112  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:19.574443  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:19.574473  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:19.594488  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:19.594517  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:19.649890  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:19.642446    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.643046    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.644619    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.645108    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:19.646648    8930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:19.649910  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:19.649923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:19.668626  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:19.668651  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:22.198480  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:22.210881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:22.230438  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.230462  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:22.230522  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:22.248861  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.248882  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:22.248922  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:22.268466  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.268499  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:22.268557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:22.289199  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.289223  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:22.289268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:22.307380  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.307405  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:22.307470  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:22.324678  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.324704  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:22.324763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:22.343704  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.343736  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:22.343791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:22.362087  687772 logs.go:282] 0 containers: []
	W1223 00:03:22.362117  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:22.362137  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:22.362150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:22.409818  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:22.409877  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:22.430134  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:22.430165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:22.485643  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:22.478699    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.479219    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.480800    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.481236    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:22.482717    9095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:22.485664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:22.485680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:22.504121  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:22.504150  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
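The "container status" step runs a compound shell command: use crictl when which(1) finds it, otherwise fall back to plain docker ps -a. Below is a sketch that invokes the same fallback through bash, the way the ssh_runner lines above do; only the command string is taken from the log, the wrapper is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when present, else docker ps, exactly as logged above.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}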
	I1223 00:03:25.031881  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:25.043513  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:25.063145  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.063167  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:25.063211  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:25.082000  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.082025  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:25.082074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:25.099962  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.099984  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:25.100038  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:25.118454  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.118479  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:25.118537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:25.136993  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.137020  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:25.137069  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:25.155902  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.155925  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:25.155974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:25.175659  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.175683  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:25.175737  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:25.194139  687772 logs.go:282] 0 containers: []
	W1223 00:03:25.194167  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:25.194180  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:25.194193  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:25.240226  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:25.240258  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:25.261339  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:25.261367  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:25.320736  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:25.313498    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.314072    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.315614    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.316005    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:25.317501    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:25.320756  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:25.320768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:25.341035  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:25.341064  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:27.870845  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:27.882071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:27.901298  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.901323  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:27.901382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:27.919859  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.919880  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:27.919930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:27.938496  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.938520  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:27.938563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:27.956888  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.956916  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:27.956972  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:27.975342  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.975362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:27.975412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:27.994015  687772 logs.go:282] 0 containers: []
	W1223 00:03:27.994038  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:27.994082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:28.013037  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.013065  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:28.013125  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:28.033210  687772 logs.go:282] 0 containers: []
	W1223 00:03:28.033234  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:28.033247  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:28.033262  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:28.078861  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:28.078892  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:28.098865  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:28.098890  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:28.154165  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:28.147100    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.147650    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149204    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.149685    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:28.151156    9422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:28.154185  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:28.154197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:28.172425  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:28.172454  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.702937  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:30.714537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:30.735323  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.735346  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:30.735411  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:30.754342  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.754364  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:30.754416  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:30.773486  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.773513  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:30.773570  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:30.792473  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.792498  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:30.792554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:30.810955  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.810981  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:30.811028  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:30.829795  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.829816  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:30.829864  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:30.848939  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.848959  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:30.849000  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:30.867397  687772 logs.go:282] 0 containers: []
	W1223 00:03:30.867423  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:30.867435  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:30.867452  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:30.887088  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:30.887116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:30.942084  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:30.934885    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.935453    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937022    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.937466    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:30.938978    9589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:30.942116  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:30.942130  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:30.960703  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:30.960730  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:30.988334  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:30.988359  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.539710  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:33.551147  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:33.569876  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.569899  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:33.569943  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:33.588678  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.588710  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:33.588766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:33.607229  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.607251  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:33.607302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:33.625442  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.625466  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:33.625527  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:33.644308  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.644340  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:33.644396  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:33.662684  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.662717  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:33.662786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:33.681135  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.681161  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:33.681209  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:33.700016  687772 logs.go:282] 0 containers: []
	W1223 00:03:33.700042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:33.700057  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:33.700070  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:33.718957  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:33.718985  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:33.747390  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:33.747417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:33.793693  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:33.793722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:33.815051  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:33.815076  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:33.869709  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:33.862833    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.863339    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.864945    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.865374    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:33.866900    9773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:36.371365  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:36.383229  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:36.403744  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.403771  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:36.403818  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:36.422087  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.422109  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:36.422163  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:36.440967  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.440989  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:36.441046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:36.459110  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.459137  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:36.459184  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:36.477754  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.477781  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:36.477838  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:36.496775  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.496803  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:36.496857  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:36.516542  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.516577  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:36.516652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:36.537692  687772 logs.go:282] 0 containers: []
	W1223 00:03:36.537720  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:36.537731  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:36.537744  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:36.585346  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:36.585376  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:36.605519  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:36.605545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:36.660230  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:36.653151    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.653663    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655147    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.655634    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:36.657120    9925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:36.660253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:36.660269  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:36.678368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:36.678395  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
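	The eight "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" probes above are one pass of minikube's control-plane container scan; every probe returns zero IDs, which is why each is followed by a "No container was found" warning. A minimal Go sketch of the same sweep, an illustration only and not minikube's actual logs.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names and the k8s_ name prefix are taken from the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same probe as the log: list containers (running or not) whose
		// name carries the k8s_<component> prefix and print only their IDs.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids)
	}
}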
	I1223 00:03:39.206672  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:39.218123  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:39.236299  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.236322  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:39.236384  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:39.256168  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.256194  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:39.256256  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:39.278907  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.278934  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:39.278987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:39.299685  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.299712  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:39.299771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:39.319824  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.319847  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:39.319890  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:39.339314  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.339340  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:39.339388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:39.357097  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.357122  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:39.357178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:39.375484  687772 logs.go:282] 0 containers: []
	W1223 00:03:39.375506  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:39.375518  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:39.375528  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:39.422143  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:39.422171  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:39.442163  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:39.442190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:39.499251  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:39.491681   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.492132   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.493822   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.494268   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:39.495815   10081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:39.499300  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:39.499313  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:39.520555  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:39.520585  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
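	Each retry cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*": diagnostics are only re-gathered while no kube-apiserver process exists. A hedged Go sketch of that poll loop; the pgrep pattern is copied from the log, the three-second sleep matches the cadence visible in the timestamps, and the two-minute deadline is purely illustrative, not minikube's real timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverUp() bool {
	// pgrep exits 0 only when a matching process exists.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the real run, the log-gathering pass above happens here.
		time.Sleep(3 * time.Second) // matches the ~3s cycle cadence in the log
	}
	fmt.Println("gave up waiting for kube-apiserver")
}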
	I1223 00:03:42.050334  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:42.062329  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:42.081392  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.081414  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:42.081466  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:42.100032  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.100060  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:42.100108  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:42.118667  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.118701  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:42.118755  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:42.137260  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.137280  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:42.137324  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:42.156202  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.156223  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:42.156268  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:42.173781  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.173805  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:42.173849  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:42.191802  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.191823  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:42.191865  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:42.210403  687772 logs.go:282] 0 containers: []
	W1223 00:03:42.210428  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:42.210439  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:42.210451  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:42.257288  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:42.257324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:42.279921  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:42.279950  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:42.335965  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:42.328933   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.329474   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331040   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.331482   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:42.333076   10249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:42.335989  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:42.336007  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:42.354691  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:42.354717  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:44.883238  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:44.894443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:44.913117  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.913141  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:44.913198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:44.931401  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.931426  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:44.931481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:44.950195  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.950223  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:44.950276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:44.968485  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.968511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:44.968566  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:44.987148  687772 logs.go:282] 0 containers: []
	W1223 00:03:44.987171  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:44.987233  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:45.005624  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.005646  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:45.005693  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:45.023699  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.023724  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:45.023791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:45.042874  687772 logs.go:282] 0 containers: []
	W1223 00:03:45.042892  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:45.042903  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:45.042913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:45.091063  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:45.091090  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:45.111078  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:45.111104  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:45.165637  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:45.158773   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.159306   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.160855   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.161311   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:45.162829   10418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:45.165664  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:45.165680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:45.183805  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:45.183831  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
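	Every "failed describe nodes" entry above reduces to the same root cause: nothing is listening on localhost:8443, so kubectl's requests are refused at the TCP level before any API call happens. A small standalone Go probe that reproduces exactly that symptom (a hypothetical helper, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no apiserver container running, this fails with
	// "connect: connection refused", the same error kubectl reports above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}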
	I1223 00:03:47.712691  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:47.724393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:47.743118  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.743145  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:47.743192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:47.764020  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.764047  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:47.764100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:47.784950  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.784979  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:47.785031  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:47.805130  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.805153  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:47.805202  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:47.824818  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.824840  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:47.824881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:47.842122  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.842142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:47.842182  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:47.860107  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.860126  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:47.860169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:47.877957  687772 logs.go:282] 0 containers: []
	W1223 00:03:47.877981  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:47.877991  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:47.878003  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:47.913554  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:47.913583  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:47.959272  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:47.959301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:47.979197  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:47.979224  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:48.034846  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:48.027499   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.028033   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.029566   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.030004   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:48.031523   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:48.034864  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:48.034876  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:50.554653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:50.565766  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:50.584506  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.584527  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:50.584568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:50.603087  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.603112  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:50.603159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:50.621694  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.621718  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:50.621758  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:50.640855  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.640882  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:50.640950  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:50.658573  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.658615  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:50.658659  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:50.676703  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.676725  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:50.676792  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:50.694997  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.695020  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:50.695084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:50.711361  687772 logs.go:282] 0 containers: []
	W1223 00:03:50.711382  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:50.711393  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:50.711405  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:50.739475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:50.739500  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:50.789788  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:50.789828  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:50.810067  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:50.810096  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:50.864855  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:50.857771   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.858239   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.859923   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.860349   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:50.861907   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:50.864881  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:50.864896  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
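	The "container status" step runs sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. it prefers crictl and falls back to plain docker. A simplified Go equivalent, assuming a PATH lookup alone decides the fallback (the real shell line also falls back when crictl exits non-zero):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when it is on PATH, otherwise use docker directly.
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("container status failed:", err)
	}
}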
	I1223 00:03:53.383457  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:53.394757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:53.414248  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.414277  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:53.414341  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:53.432950  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.432970  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:53.433020  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:53.452058  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.452081  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:53.452143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:53.470670  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.470698  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:53.470751  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:53.489416  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.489443  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:53.489486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:53.508963  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.508995  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:53.509057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:53.530683  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.530710  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:53.530770  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:53.551545  687772 logs.go:282] 0 containers: []
	W1223 00:03:53.551577  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:53.551610  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:53.551627  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:53.570296  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:53.570324  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:53.598123  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:53.598154  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:53.646248  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:53.646280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:53.666819  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:53.666844  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:53.722068  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:53.715109   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.715646   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717149   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.717536   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:53.719006   10928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:56.223706  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:56.235187  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:56.255491  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.255511  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:56.255551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:56.274455  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.274479  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:56.274519  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:56.293621  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.293648  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:56.293702  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:56.312485  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.312511  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:56.312558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:56.331239  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.331266  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:56.331320  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:56.349793  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.349813  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:56.349856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:56.368378  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.368397  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:56.368446  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:56.386706  687772 logs.go:282] 0 containers: []
	W1223 00:03:56.386730  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:56.386744  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:56.386759  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:56.435036  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:56.435067  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:56.456766  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:56.456793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:56.515022  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:56.506534   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.507203   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.508885   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.509323   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:56.510920   11082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:56.515044  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:56.515056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:03:56.537382  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:56.537424  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
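	The describe-nodes step itself is just the version-pinned kubectl binary run against the node-local kubeconfig, exactly as the command line in the log shows. A sketch of that invocation, with both paths copied from the log and nothing else taken from minikube's source; while the apiserver is down it exits with status 1, which is what logs.go:130 keeps reporting:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Both paths below come from the log's own command line.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// With no apiserver on localhost:8443 this is "exit status 1",
		// matching the repeated failure recorded above.
		fmt.Println("describe nodes failed:", err)
	}
}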
	I1223 00:03:59.067413  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:03:59.078926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:03:59.098458  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.098490  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:03:59.098543  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:03:59.119074  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.119100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:03:59.119146  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:03:59.138014  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.138036  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:03:59.138082  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:03:59.157367  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.157390  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:03:59.157433  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:03:59.175923  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.175950  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:03:59.176008  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:03:59.194211  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.194243  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:03:59.194295  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:03:59.212980  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.213004  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:03:59.213050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:03:59.231233  687772 logs.go:282] 0 containers: []
	W1223 00:03:59.231255  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:03:59.231266  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:03:59.231277  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:03:59.260354  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:03:59.260377  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:03:59.307751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:03:59.307784  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:03:59.327756  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:03:59.327782  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:03:59.382873  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:03:59.375811   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.376331   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.377895   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.378317   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:03:59.379902   11264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:03:59.382895  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:03:59.382908  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:01.903304  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:01.914514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:01.933300  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.933328  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:01.933388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:01.952153  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.952181  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:01.952225  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:01.970903  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.970933  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:01.970987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:01.989493  687772 logs.go:282] 0 containers: []
	W1223 00:04:01.989513  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:01.989567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:02.009114  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.009141  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:02.009198  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:02.030277  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.030310  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:02.030365  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:02.050466  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.050492  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:02.050551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:02.069917  687772 logs.go:282] 0 containers: []
	W1223 00:04:02.069941  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:02.069956  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:02.069970  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:02.115721  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:02.115750  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:02.135348  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:02.135373  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:02.190691  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:02.183688   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.184205   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.185799   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.186209   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:02.187682   11415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:02.190712  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:02.190724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:02.209097  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:02.209122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:04.737357  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:04.748553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:04.770341  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.770369  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:04.770424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:04.791137  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.791165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:04.791214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:04.810520  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.810541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:04.810607  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:04.828972  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.829000  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:04.829055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:04.849074  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.849096  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:04.849148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:04.868041  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.868063  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:04.868115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:04.886481  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.886504  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:04.886567  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:04.905235  687772 logs.go:282] 0 containers: []
	W1223 00:04:04.905262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
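	Each gathering cycle sweeps the expected control-plane containers by the k8s_ name prefix that cri-dockerd applies to pod containers; an empty result for every component is what produces the "No container was found matching" warnings. A hedged bash sketch equivalent to one sweep (the loop form is illustrative; minikube issues the probes individually from Go):

	    # Hypothetical loop over the components probed in the log.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      [ -z "$ids" ] && echo "no container matching ${c}"
	    done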
	I1223 00:04:04.905274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:04.905285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:04.953851  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:04.953880  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:04.973781  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:04.973806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:05.031345  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:05.024020   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.024585   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026291   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.026768   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:05.028137   11570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:05.031368  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:05.031383  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:05.050812  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:05.050839  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:07.580204  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
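	The pgrep call that opens each cycle is the readiness poll for the apiserver process; as the timestamps show, the whole cycle repeats roughly every 2.5s while the poll keeps failing. Annotated form of the same command (flag meanings per procps pgrep):

	    # -f  match against the full command line, not just the process name
	    # -x  require the pattern to match the whole line exactly
	    # -n  select only the newest matching process
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'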
	I1223 00:04:07.592091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:07.611238  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.611267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:07.611318  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:07.630713  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.630736  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:07.630786  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:07.649511  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.649541  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:07.649620  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:07.668236  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.668264  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:07.668323  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:07.687077  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.687101  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:07.687158  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:07.705952  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.705982  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:07.706036  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:07.725156  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.725178  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:07.725224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:07.744024  687772 logs.go:282] 0 containers: []
	W1223 00:04:07.744049  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:07.744063  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:07.744079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
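	The journal pulls take only the last 400 entries per gather, and -u may be repeated to merge units, as in the Docker/cri-docker gather. Annotated equivalents of the two commands in the log:

	    # -u UNIT  restrict to a systemd unit (repeatable)   -n 400  last 400 lines
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400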
	I1223 00:04:07.797680  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:07.797721  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:07.819453  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:07.819481  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:07.875026  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:07.867909   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.868453   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870037   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.870465   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:07.872022   11740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:07.875046  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:07.875059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:07.893942  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:07.893968  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:10.422234  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:10.433749  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:10.453027  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.453049  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:10.453099  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:10.471766  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.471789  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:10.471840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:10.489960  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.489981  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:10.490025  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:10.508537  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.508558  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:10.508614  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:10.527336  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.527362  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:10.527418  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:10.545995  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.546019  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:10.546074  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:10.564167  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.564196  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:10.564254  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:10.582919  687772 logs.go:282] 0 containers: []
	W1223 00:04:10.582947  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:10.582961  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:10.582974  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:10.630969  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:10.631004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
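	The dmesg gather keeps only warning-and-worse kernel messages. Annotated form (flag meanings per util-linux dmesg; -P suppresses the pager that -H would otherwise invoke):

	    # -H human-readable   -P no pager   -L=never disable color
	    # --level ...  keep only the listed priorities
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400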
	I1223 00:04:10.651161  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:10.651197  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:10.709000  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:10.701750   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.702399   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.703967   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.704377   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:10.705928   11906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:10.709026  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:10.709041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:10.728175  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:10.728203  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:13.258812  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:13.271437  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:13.293437  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.293468  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:13.293525  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:13.313483  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.313508  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:13.313568  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:13.333612  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.333643  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:13.333709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:13.353086  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.353111  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:13.353169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:13.372208  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.372230  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:13.372275  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:13.391431  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.391457  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:13.391507  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:13.410402  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.410434  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:13.410502  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:13.428653  687772 logs.go:282] 0 containers: []
	W1223 00:04:13.428675  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:13.428687  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:13.428709  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:13.474690  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:13.474729  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:13.495426  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:13.495457  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:13.550790  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:13.543422   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.544009   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.545544   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.546130   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:13.547692   12074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:13.550810  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:13.550822  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:13.569370  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:13.569397  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.099133  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:16.110484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:16.129712  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.129743  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:16.129808  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:16.147785  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.147808  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:16.147854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:16.167259  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.167284  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:16.167333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:16.186151  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.186178  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:16.186223  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:16.206074  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.206099  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:16.206154  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:16.225296  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.225319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:16.225369  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:16.244091  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.244115  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:16.244160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:16.263620  687772 logs.go:282] 0 containers: []
	W1223 00:04:16.263643  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:16.263655  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:16.263667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:16.323241  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:16.316239   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.316726   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318256   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.318676   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:16.319891   12235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:16.323265  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:16.323281  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:16.342320  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:16.342346  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:16.371156  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:16.371183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:16.421158  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:16.421188  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:18.942795  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:18.954257  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:18.974190  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.974217  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:18.974270  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:18.993178  687772 logs.go:282] 0 containers: []
	W1223 00:04:18.993200  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:18.993245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:19.013377  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.013405  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:19.013465  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:19.034917  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.034941  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:19.034990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:19.054247  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.054271  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:19.054326  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:19.072206  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.072235  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:19.072297  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:19.091855  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.091882  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:19.091933  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:19.111067  687772 logs.go:282] 0 containers: []
	W1223 00:04:19.111100  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:19.111114  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:19.111127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:19.161923  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:19.161955  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:19.182679  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:19.182708  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:19.239475  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:19.232458   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.233037   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234582   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.234997   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:19.236569   12393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:19.239503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:19.239521  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:19.259046  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:19.259075  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:21.799246  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:21.810742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:21.830826  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.830852  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:21.830896  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:21.849427  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.849455  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:21.849501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:21.867823  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.867847  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:21.867891  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:21.886431  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.886452  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:21.886508  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:21.905079  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.905103  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:21.905160  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:21.923344  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.923365  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:21.923407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:21.941945  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.941966  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:21.942012  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:21.959749  687772 logs.go:282] 0 containers: []
	W1223 00:04:21.959773  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:21.959785  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:21.959795  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:21.979750  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:21.979776  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:22.008278  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:22.008301  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:22.059988  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:22.060022  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:22.080174  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:22.080201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:22.135625  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:22.128550   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.129064   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130551   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.130965   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:22.132436   12578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:24.636526  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:24.647769  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:24.666800  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.666823  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:24.666873  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:24.685078  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.685100  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:24.685153  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:24.703219  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.703238  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:24.703287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:24.721619  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.721647  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:24.721705  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:24.740548  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.740570  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:24.740632  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:24.758544  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.758568  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:24.758633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:24.776285  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.776317  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:24.776445  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:24.794360  687772 logs.go:282] 0 containers: []
	W1223 00:04:24.794386  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:24.794399  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:24.794413  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:24.840111  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:24.840142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:24.860260  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:24.860286  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:24.915702  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:24.908230   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.908821   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910346   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.910801   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:24.912322   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:24.915723  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:24.915736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:24.934368  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:24.934394  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.463653  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:27.474997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:27.494098  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.494127  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:27.494183  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:27.513771  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.513799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:27.513855  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:27.534688  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.534720  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:27.534777  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:27.553043  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.553065  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:27.553115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:27.571979  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.572005  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:27.572049  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:27.590357  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.590376  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:27.590419  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:27.609465  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.609490  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:27.609547  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:27.628214  687772 logs.go:282] 0 containers: []
	W1223 00:04:27.628238  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:27.628253  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:27.628267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:27.646519  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:27.646545  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:27.674935  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:27.674958  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:27.721277  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:27.721306  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:27.741140  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:27.741165  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:27.796676  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:27.789709   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.790246   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.791825   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.792273   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:27.793752   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:30.297779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:30.308987  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:30.327806  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.327827  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:30.327885  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:30.347142  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.347165  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:30.347216  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:30.365629  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.365656  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:30.365729  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:30.383470  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.383496  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:30.383552  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:30.402127  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.402152  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:30.402214  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:30.420681  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.420706  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:30.420757  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:30.439453  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.439475  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:30.439517  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:30.458669  687772 logs.go:282] 0 containers: []
	W1223 00:04:30.458691  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:30.458702  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:30.458713  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:30.505022  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:30.505050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:30.528295  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:30.528323  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:30.585055  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:30.577823   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.578469   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580017   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.580458   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:30.582038   13062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
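
The describe-nodes failure is a direct consequence of the empty container scans: the on-node kubectl dials localhost:8443 and gets ECONNREFUSED, which means nothing is listening on the apiserver port at all (a timeout instead would point at a firewall or a hung apiserver). A small probe that distinguishes the two cases (illustrative, not part of the test harness):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "[::1]:8443", 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("something is listening on 8443")
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Println("connection refused: no apiserver bound to 8443")
        default:
            fmt.Println("other failure (timeout, route, ...):", err)
        }
    }
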
	I1223 00:04:30.585076  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:30.585088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:30.604200  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:30.604229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
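
The "container status" gather is a shell fallback chain: try crictl, and if that fails fall back to `docker ps -a` (the shell version always attempts crictl, using `echo crictl` as a stand-in when `which` finds nothing, and relies on the failure to trigger the fallback). The same preference order in Go, slightly tidied up, might look like this (sketch; `containerStatus` is an assumed helper name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Prefer crictl, fall back to docker, mirroring the one-liner above.
    func containerStatus() (string, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(err)
        fmt.Print(out)
    }
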
	I1223 00:04:33.131779  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:33.143670  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:33.163179  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.163200  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:33.163245  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:33.182970  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.182992  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:33.183043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:33.201569  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.201609  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:33.201656  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:33.219907  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.219931  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:33.219989  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:33.239604  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.239630  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:33.239675  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:33.258182  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.258211  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:33.258263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:33.277606  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.277632  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:33.277678  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:33.297258  687772 logs.go:282] 0 containers: []
	W1223 00:04:33.297283  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:33.297296  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:33.297312  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:33.344903  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:33.344932  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:33.364742  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:33.364768  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:33.420528  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:33.413527   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.414059   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.415546   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.416007   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:33.417495   13221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:33.420549  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:33.420560  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:33.439384  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:33.439411  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:35.968903  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:35.980276  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:35.999444  687772 logs.go:282] 0 containers: []
	W1223 00:04:35.999474  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:35.999534  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:36.018792  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.018819  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:36.018880  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:36.036956  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.036985  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:36.037043  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:36.055239  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.055265  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:36.055315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:36.073241  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.073272  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:36.073325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:36.091575  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.091613  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:36.091662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:36.110369  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.110396  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:36.110448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:36.128481  687772 logs.go:282] 0 containers: []
	W1223 00:04:36.128505  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:36.128516  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:36.128526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:36.176492  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:36.176526  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:36.196649  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:36.196675  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:36.253201  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:36.245327   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.245908   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247446   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.247880   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:36.249674   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:36.253224  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:36.253241  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:36.273351  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:36.273379  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.804411  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:38.815899  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:38.834644  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.834668  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:38.834713  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:38.853892  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.853919  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:38.853967  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:38.871484  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.871505  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:38.871554  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:38.889803  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.889828  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:38.889879  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:38.909558  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.909586  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:38.909652  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:38.929528  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.929553  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:38.929624  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:38.948153  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.948181  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:38.948241  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:38.966657  687772 logs.go:282] 0 containers: []
	W1223 00:04:38.966679  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:38.966689  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:38.966711  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:38.994610  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:38.994637  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:39.040694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:39.040722  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:39.060391  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:39.060417  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:39.116169  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:39.108908   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.109405   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111037   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.111517   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:39.113022   13569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:39.116189  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:39.116201  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.638009  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:41.650427  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:41.670214  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.670241  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:41.670289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:41.689539  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.689568  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:41.689651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:41.708449  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.708472  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:41.708520  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:41.727897  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.727918  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:41.727963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:41.748169  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.748200  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:41.748252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:41.767148  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.767172  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:41.767224  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:41.789562  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.789589  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:41.789665  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:41.808259  687772 logs.go:282] 0 containers: []
	W1223 00:04:41.808281  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:41.808292  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:41.808304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:41.827093  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:41.827120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:41.854644  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:41.854671  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:41.901960  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:41.901995  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:41.921983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:41.922011  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:41.978723  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:41.971457   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.971976   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973486   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.973968   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:41.975679   13741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:44.479583  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:44.491055  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:44.513749  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.513779  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:44.513836  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:44.535619  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.535648  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:44.535722  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:44.555441  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.555464  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:44.555512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:44.574828  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.574851  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:44.574895  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:44.593270  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.593293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:44.593350  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:44.612157  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.612182  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:44.612239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:44.630342  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.630366  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:44.630417  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:44.648864  687772 logs.go:282] 0 containers: []
	W1223 00:04:44.648893  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:44.648905  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:44.648917  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:44.698462  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:44.698494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:44.718432  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:44.718463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:44.777738  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:44.767938   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.768520   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770129   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.770658   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:44.773585   13879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:44.777764  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:44.777781  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:44.798488  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:44.798522  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:47.328787  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:47.340091  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:47.359764  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.359786  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:47.359834  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:47.378531  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.378557  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:47.378633  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:47.397279  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.397303  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:47.397351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:47.415379  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.415404  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:47.415449  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:47.433342  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.433363  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:47.433407  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:47.452134  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.452153  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:47.452195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:47.470492  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.470514  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:47.470565  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:47.489435  687772 logs.go:282] 0 containers: []
	W1223 00:04:47.489462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:47.489475  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:47.489490  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:47.543310  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:47.543341  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:47.563678  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:47.563716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:47.618877  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:47.611492   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.612043   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.613658   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.614136   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.615686   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:47.611492   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.612043   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.613658   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.614136   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:47.615686   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:47.618902  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:47.618916  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:47.637117  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:47.637142  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:50.165288  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:50.176485  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:50.195504  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.195530  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:50.195573  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:50.214411  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.214435  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:50.214486  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:50.232050  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.232073  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:50.232113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:50.249723  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.249747  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:50.249805  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:50.269197  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.269220  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:50.269262  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:50.287018  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.287042  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:50.287084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:50.304852  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.304876  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:50.304923  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:50.323126  687772 logs.go:282] 0 containers: []
	W1223 00:04:50.323150  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:50.323164  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:50.323177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:50.371303  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:50.371328  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:50.391396  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:50.391419  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:50.446479  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:50.439351   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.440054   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.441636   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.442091   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.443655   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:50.439351   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.440054   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.441636   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.442091   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:50.443655   14214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:50.446503  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:50.446519  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:50.466869  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:50.466895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.004783  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:53.016488  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:53.037102  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.037130  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:53.037175  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:53.056487  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.056509  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:53.056551  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:53.074919  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.074938  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:53.074983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:53.093142  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.093163  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:53.093203  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:53.112007  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.112030  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:53.112079  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:53.130737  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.130759  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:53.130802  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:53.149980  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.150009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:53.150057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:53.167468  687772 logs.go:282] 0 containers: []
	W1223 00:04:53.167493  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:53.167503  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:53.167513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:53.195775  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:53.195800  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:53.243212  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:53.243238  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:53.263047  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:53.263073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:53.319009  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:53.311761   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.312309   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.313970   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.314449   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.315934   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:04:53.311761   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.312309   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.313970   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.314449   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:53.315934   14403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:04:53.319029  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:53.319041  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:55.838963  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:55.850169  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:55.868811  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.868833  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:55.868878  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:55.887281  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.887309  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:55.887361  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:55.905343  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.905372  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:55.905425  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:55.922787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.922811  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:55.922858  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:55.941063  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.941090  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:55.941143  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:55.960388  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.960413  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:55.960549  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:55.978787  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.978810  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:55.978854  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:55.996489  687772 logs.go:282] 0 containers: []
	W1223 00:04:55.996516  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:55.996530  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:55.996542  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:56.048197  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:56.048229  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:56.068640  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:56.068668  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:56.124436  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:56.117357   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.117926   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119480   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.119940   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:56.121489   14557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:04:56.124461  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:56.124478  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:56.143079  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:56.143102  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
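	The cycle above polls each expected control-plane container by name before gathering logs. A minimal sketch of the same enumeration, assuming a shell on the minikube node (for example via "minikube ssh"):

	    # Poll every control-plane container the harness filters for, in one pass.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-<none>}"
	    done

	Here every component prints "<none>", matching the "0 containers" lines in the log.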
	I1223 00:04:58.672032  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:04:58.683539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:04:58.702739  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.702762  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:04:58.702814  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:04:58.721434  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.721465  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:04:58.721514  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:04:58.741740  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.741768  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:04:58.741811  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:04:58.760960  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.760982  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:04:58.761035  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:04:58.780979  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.781001  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:04:58.781045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:04:58.799417  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.799453  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:04:58.799501  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:04:58.817985  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.818007  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:04:58.818051  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:04:58.837633  687772 logs.go:282] 0 containers: []
	W1223 00:04:58.837659  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:04:58.837671  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:04:58.837683  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:04:58.856421  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:04:58.856448  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:04:58.883550  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:04:58.883574  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:04:58.932130  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:04:58.932158  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:04:58.953160  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:04:58.953189  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:04:59.009951  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:04:59.002105   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.002736   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004318   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.004809   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:04:59.006430   14731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
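	Each retry is gated on the pgrep probe above; the roughly 2.5s gap between cycle timestamps suggests a fixed poll interval. A hedged sketch of such a wait loop (the 240s deadline is an assumption, not taken from this log):

	    # Wait for a kube-apiserver process; assumed 240s deadline, ~2.5s interval.
	    deadline=$((SECONDS + 240))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "kube-apiserver never appeared" >&2
	        exit 1
	      fi
	      sleep 2.5
	    done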
	I1223 00:05:01.512529  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:01.523921  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:01.542499  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.542525  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:01.542569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:01.560824  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.560850  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:01.560892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:01.578994  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.579017  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:01.579060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:01.597267  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.597293  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:01.597346  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:01.615860  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.615880  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:01.615919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:01.635022  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.635045  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:01.635084  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:01.654257  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.654282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:01.654338  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:01.672470  687772 logs.go:282] 0 containers: []
	W1223 00:05:01.672492  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:01.672502  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:01.672513  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:01.720496  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:01.720525  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:01.740698  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:01.740724  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:01.800538  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:01.793437   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.794074   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795170   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.795640   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:01.797188   14884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:01.800562  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:01.800579  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:01.820265  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:01.820291  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
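	The "container status" step is runtime-agnostic: it resolves crictl if present and otherwise falls back to the Docker CLI. The same fallback, written out as a sketch:

	    # Prefer the CRI-level listing when crictl is installed; else use docker.
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi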
	I1223 00:05:04.348938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:04.360190  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:04.379095  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.379124  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:04.379177  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:04.396991  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.397012  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:04.397057  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:04.415658  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.415682  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:04.415750  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:04.434023  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.434049  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:04.434093  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:04.452721  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.452744  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:04.452791  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:04.471221  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.471247  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:04.471294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:04.489656  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.489685  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:04.489734  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:04.508637  687772 logs.go:282] 0 containers: []
	W1223 00:05:04.508669  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:04.508689  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:04.508702  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:04.526928  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:04.526953  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:04.553896  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:04.553923  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:04.602972  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:04.602999  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:04.622788  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:04.622812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:04.678232  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:04.670559   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.671188   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.672874   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.673311   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:04.675032   15068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
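	Every "describe nodes" failure above reduces to the same symptom: nothing is listening on localhost:8443. A quick manual probe of the endpoint (a sketch; /livez is the apiserver health path, and curl prints 000 when the TCP connect is refused):

	    # Expect 000 here while the apiserver is down, 200 once it is healthy.
	    curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/livez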
	I1223 00:05:07.179923  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:07.191963  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:07.211239  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.211263  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:07.211304  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:07.230281  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.230302  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:07.230343  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:07.249365  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.249391  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:07.249443  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:07.269410  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.269431  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:07.269484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:07.288681  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.288711  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:07.288756  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:07.307722  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.307742  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:07.307785  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:07.324479  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.324503  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:07.324557  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:07.343010  687772 logs.go:282] 0 containers: []
	W1223 00:05:07.343030  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:07.343041  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:07.343056  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:07.370090  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:07.370116  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:07.416268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:07.416294  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:07.436063  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:07.436088  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:07.492624  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:07.485207   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.485853   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.487473   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.488003   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:07.489566   15232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:07.492650  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:07.492667  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.011735  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:10.025412  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:10.046816  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.046848  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:10.046917  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:10.065664  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.065693  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:10.065752  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:10.084486  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.084512  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:10.084569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:10.103489  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.103510  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:10.103563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:10.121383  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.121413  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:10.121457  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:10.139817  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.139840  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:10.139883  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:10.158123  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.158142  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:10.158195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:10.176690  687772 logs.go:282] 0 containers: []
	W1223 00:05:10.176714  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:10.176728  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:10.176743  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:10.221786  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:10.221818  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:10.241642  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:10.241670  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:10.306092  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:10.298846   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.299321   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.300928   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.301392   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:10.302935   15375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:10.306110  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:10.306122  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:10.325227  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:10.325254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
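	The kubelet, Docker, and dmesg logs are sampled with a 400-line cap each cycle. When debugging by hand, the same sources can be read uncapped (sketch):

	    # Same sources the harness samples, without the tail/-n limits.
	    sudo journalctl -u kubelet --no-pager
	    sudo journalctl -u docker -u cri-docker --no-pager
	    sudo dmesg --level warn,err,crit,alert,emerg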
	I1223 00:05:12.853199  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:12.864559  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:12.883528  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.883553  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:12.883615  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:12.901914  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.901946  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:12.902003  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:12.920676  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.920703  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:12.920746  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:12.938812  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.938840  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:12.938898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:12.956564  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.956588  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:12.956651  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:12.975030  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.975056  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:12.975112  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:12.992748  687772 logs.go:282] 0 containers: []
	W1223 00:05:12.992770  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:12.992819  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:13.013710  687772 logs.go:282] 0 containers: []
	W1223 00:05:13.013733  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:13.013744  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:13.013756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:13.044889  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:13.044920  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:13.090565  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:13.090611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:13.110578  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:13.110614  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:13.166048  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:13.158806   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.159417   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161011   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.161480   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:13.163044   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:13.166066  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:13.166079  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.685941  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:15.697434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:15.716560  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.716607  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:15.716664  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:15.735775  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.735799  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:15.735847  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:15.753974  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.753996  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:15.754046  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:15.771763  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.771788  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:15.771846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:15.790222  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.790249  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:15.790294  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:15.808671  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.808691  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:15.808735  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:15.827295  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.827324  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:15.827377  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:15.845637  687772 logs.go:282] 0 containers: []
	W1223 00:05:15.845658  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:15.845668  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:15.845679  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:15.892975  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:15.893004  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:15.912599  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:15.912626  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:15.967763  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:15.960925   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.961478   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963006   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.963401   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:15.964895   15710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:15.967788  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:15.967801  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:15.986603  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:15.986632  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
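	Note that the describe step runs the version-matched kubectl staged under /var/lib/minikube/binaries against the node-local kubeconfig. The same pairing works for any manual read-only check once the apiserver is reachable (hypothetical example):

	    # Hypothetical manual check with the staged binary and node kubeconfig.
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide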
	I1223 00:05:18.516732  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:18.529415  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:18.549048  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.549069  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:18.549113  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:18.567672  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.567705  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:18.567771  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:18.586513  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.586538  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:18.586613  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:18.604518  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.604538  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:18.604579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:18.623446  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.623467  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:18.623510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:18.642213  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.642230  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:18.642279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:18.660501  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.660521  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:18.660563  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:18.678846  687772 logs.go:282] 0 containers: []
	W1223 00:05:18.678869  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:18.678882  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:18.678893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:18.727936  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:18.727965  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:18.749033  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:18.749059  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:18.804351  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:18.796992   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.797493   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799074   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.799516   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:18.801045   15875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:18.804386  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:18.804401  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:18.822650  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:18.822681  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:21.351938  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:21.363094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:21.382091  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.382123  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:21.382179  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:21.400790  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.400813  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:21.400861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:21.418989  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.419014  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:21.419060  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:21.437814  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.437839  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:21.437898  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:21.456967  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.456991  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:21.457045  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:21.475541  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.475566  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:21.475644  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:21.494493  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.494518  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:21.494576  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:21.513952  687772 logs.go:282] 0 containers: []
	W1223 00:05:21.513979  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:21.513990  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:21.514001  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:21.563253  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:21.563283  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:21.583663  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:21.583693  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:21.638754  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:21.631703   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.632235   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.633835   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.634263   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:21.635800   16045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:05:21.638774  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:21.638786  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:21.657674  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:21.657704  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:24.188905  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:24.200277  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:24.220108  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.220133  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:24.220188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:24.240286  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.240307  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:24.240351  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:24.260644  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.260670  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:24.260724  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:24.282918  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.282943  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:24.282990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:24.302929  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.302956  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:24.303013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:24.322124  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.322145  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:24.322196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:24.340965  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.340993  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:24.341050  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:24.360121  687772 logs.go:282] 0 containers: []
	W1223 00:05:24.360148  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:24.360162  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:24.360177  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:24.406776  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:24.406809  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:24.428882  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:24.428909  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:24.484257  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:24.477184   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.477734   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479261   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.479752   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:24.481241   16205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:24.484286  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:24.484304  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:24.504724  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:24.504752  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
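Each retry iteration runs the same presence probe: pgrep for a kube-apiserver process, then one docker ps query per control-plane component using the k8s_<name> container-name prefix that dockershim/cri-dockerd naming produces. A minimal bash sketch approximating that loop (an assumption about intent, not minikube's actual logs.go implementation):

	# Probe for each expected control-plane container by name prefix.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  if [ -n "$ids" ]; then
	    echo "${c}: ${ids}"
	  else
	    echo "no container found matching \"${c}\""
	  fi
	done

In this run every component returns 0 containers, so the start loop can only keep re-gathering logs and retrying.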
	I1223 00:05:27.038561  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:27.050259  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:27.069265  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.069288  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:27.069333  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:27.088081  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.088108  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:27.088171  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:27.107172  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.107198  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:27.107246  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:27.125773  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.125804  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:27.125862  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:27.144259  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.144282  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:27.144339  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:27.163197  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.163217  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:27.163263  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:27.181942  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.181971  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:27.182030  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:27.199936  687772 logs.go:282] 0 containers: []
	W1223 00:05:27.199964  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:27.199980  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:27.199996  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:27.218431  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:27.218456  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:27.246756  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:27.246783  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:27.297557  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:27.297603  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:27.318177  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:27.318205  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:27.374968  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:27.367760   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.368359   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.369972   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.370370   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:27.371924   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:29.875712  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:29.887100  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:29.906809  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.906834  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:29.906892  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:29.926388  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.926414  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:29.926467  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:29.946220  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.946248  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:29.946302  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:29.967102  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.967131  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:29.967188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:29.986540  687772 logs.go:282] 0 containers: []
	W1223 00:05:29.986564  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:29.986631  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:30.004809  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.004835  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:30.004881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:30.023625  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.023655  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:30.023711  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:30.042067  687772 logs.go:282] 0 containers: []
	W1223 00:05:30.042089  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:30.042100  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:30.042120  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:30.061885  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:30.061913  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:30.090401  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:30.090432  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:30.138962  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:30.138993  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:30.159224  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:30.159250  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:30.216295  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:30.208699   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.209372   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211074   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.211516   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:30.213098   16555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:32.716974  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:32.728432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:32.748217  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.748245  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:32.748292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:32.767866  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.767887  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:32.767935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:32.788690  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.788723  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:32.788782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:32.808366  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.808397  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:32.808460  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:32.827631  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.827655  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:32.827714  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:32.846429  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.846456  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:32.846511  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:32.865177  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.865202  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:32.865258  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:32.885235  687772 logs.go:282] 0 containers: []
	W1223 00:05:32.885258  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:32.885268  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:32.885280  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:32.905218  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:32.905245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:32.960860  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:32.953652   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.954228   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.955802   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.956269   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:32.957894   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:32.960885  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:32.960905  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:32.979917  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:32.979943  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:33.008187  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:33.008218  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.555359  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:35.566888  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:35.586562  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.586588  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:35.586657  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:35.605495  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.605522  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:35.605579  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:35.624671  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.624700  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:35.624760  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:35.643198  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.643222  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:35.643278  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:35.662223  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.662245  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:35.662290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:35.681991  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.682016  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:35.682071  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:35.700985  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.701009  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:35.701062  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:35.719976  687772 logs.go:282] 0 containers: []
	W1223 00:05:35.720000  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:35.720015  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:35.720029  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:35.767694  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:35.767728  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:35.792896  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:35.792935  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:35.849448  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:35.842971   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.843476   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845024   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.845404   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:35.846511   16872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:35.849470  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:35.849491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:35.868248  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:35.868274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:38.397175  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:38.408856  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:38.428054  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.428085  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:38.428141  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:38.447350  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.447376  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:38.447428  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:38.466426  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.466455  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:38.466512  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:38.486074  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.486104  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:38.486173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:38.505584  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.505626  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:38.505709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:38.527387  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.527416  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:38.527473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:38.547928  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.547955  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:38.548015  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:38.568237  687772 logs.go:282] 0 containers: []
	W1223 00:05:38.568262  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:38.568274  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:38.568285  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:38.616522  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:38.616555  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:38.638676  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:38.638707  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:38.694984  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:38.687773   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.688337   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.689839   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.690288   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:38.691876   17030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:38.695006  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:38.695019  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:38.713940  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:38.713969  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:41.244859  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:41.256283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:41.275201  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.275233  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:41.275280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:41.295272  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.295299  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:41.295353  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:41.313039  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.313069  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:41.313135  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:41.331394  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.331418  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:41.331491  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:41.350556  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.350583  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:41.350650  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:41.369215  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.369242  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:41.369290  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:41.387799  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.387826  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:41.387877  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:41.406760  687772 logs.go:282] 0 containers: []
	W1223 00:05:41.406785  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:41.406799  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:41.406813  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:41.453518  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:41.453548  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:41.473671  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:41.473700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:41.531098  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:41.523365   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.523912   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.525536   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.526073   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:41.527560   17203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:41.531124  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:41.531139  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:41.551968  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:41.551997  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:44.081115  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:44.092382  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:44.111299  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.111326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:44.111381  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:44.130168  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.130196  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:44.130250  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:44.149028  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.149052  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:44.149109  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:44.167326  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.167346  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:44.167388  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:44.185875  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.185898  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:44.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:44.205297  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.205320  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:44.205370  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:44.224561  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.224608  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:44.224661  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:44.242760  687772 logs.go:282] 0 containers: []
	W1223 00:05:44.242782  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:44.242795  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:44.242808  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:44.290363  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:44.290399  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:44.310780  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:44.310806  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:44.367913  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:44.360501   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.361124   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.362755   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.363237   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:44.364761   17368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:44.367931  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:44.367945  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:44.387052  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:44.387080  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:46.916305  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:46.927926  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:46.946856  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.946882  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:46.946941  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:46.965651  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.965674  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:46.965720  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:46.984835  687772 logs.go:282] 0 containers: []
	W1223 00:05:46.984863  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:46.984920  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:47.005005  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.005033  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:47.005095  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:47.026916  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.026948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:47.026996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:47.047971  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.048003  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:47.048064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:47.067344  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.067372  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:47.067424  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:47.087055  687772 logs.go:282] 0 containers: []
	W1223 00:05:47.087079  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:47.087093  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:47.087107  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:47.134052  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:47.134085  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:47.154446  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:47.154479  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:47.210710  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:47.203541   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.204152   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.205769   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.206170   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:47.207683   17534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:47.210734  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:47.210746  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:47.230988  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:47.231017  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:49.759465  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:49.771325  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:49.791131  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.791160  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:49.791219  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:49.810792  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.810814  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:49.810859  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:49.829432  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.829454  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:49.829499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:49.847527  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.847548  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:49.847603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:49.866252  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.866275  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:49.866315  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:49.885934  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.885955  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:49.885996  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:49.903668  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.903690  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:49.903733  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:49.923276  687772 logs.go:282] 0 containers: []
	W1223 00:05:49.923298  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:49.923309  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:49.923320  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:49.968185  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:49.968217  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:49.988993  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:49.989021  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:50.052060  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:50.045040   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.045626   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047194   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.047655   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:50.049139   17695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
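The stanza above is the other half of the empty container lists: /var/lib/minikube/kubeconfig points the node-local kubectl binary (under /var/lib/minikube/binaries/v1.35.0-rc.1/) at localhost:8443, and since no kube-apiserver container was ever started, nothing is listening there; the kernel rejects each connect outright, which is why five discovery attempts fail within milliseconds instead of hitting the 32s timeout. A hypothetical stand-alone probe (not part of the test suite) that distinguishes this fail-fast refused state from a genuine hang:

    // hypothetical probe: a refused connect returns ECONNREFUSED almost
    // immediately (RST from the kernel), whereas a silently dropped
    // packet would run into the dial timeout instead
    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("something is listening on 8443")
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Println("refused: port closed, no apiserver process")
        default:
            fmt.Println("other failure (drop, DNS, timeout):", err)
        }
    }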
	I1223 00:05:50.052083  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:50.052100  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:50.070860  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:50.070885  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
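The container status collector relies on a shell fallback chain: which crictl || echo crictl substitutes crictl's full path when it is installed (and otherwise just the bare name, so the sudo invocation fails and control passes on), and the outer || then falls back to plain docker ps -a. A rough Go rendering of the same preference order (an illustration, not the minikube implementation):

    // fallback sketch: prefer crictl when it is on PATH, else use docker
    package main

    import (
        "fmt"
        "os/exec"
    )

    func listContainers() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := listContainers()
        if err != nil {
            fmt.Println("listing failed:", err)
            return
        }
        fmt.Print(string(out))
    }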
	I1223 00:05:52.599679  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:52.611289  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:52.629699  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.629724  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:52.629782  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:52.648660  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.648689  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:52.648740  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:52.667204  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.667232  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:52.667287  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:52.685635  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.685667  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:52.685718  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:52.703669  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.703692  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:52.703742  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:52.721467  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.721495  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:52.721553  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:52.739858  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.739885  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:52.739930  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:52.759123  687772 logs.go:282] 0 containers: []
	W1223 00:05:52.759151  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:52.759165  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:52.759178  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:52.812520  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:52.812552  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:52.832551  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:52.832578  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:52.887680  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:52.880327   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.880960   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.882578   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.883148   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:52.884700   17864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:52.887700  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:52.887719  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:52.906246  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:52.906276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
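The cycle that just completed repeats essentially unchanged every 2.5 to 3 seconds for the rest of this section: probe for a kube-apiserver process, and when none is found, re-enumerate the containers and re-gather the five log sources. A simplified sketch of that wait loop (assumed shape, interval, and deadline for illustration, not the actual minikube source):

    // polling sketch: check for the apiserver process on an interval,
    // collect diagnostics while it is missing, give up at a deadline
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // mirrors the logged check: sudo pgrep -xnf kube-apiserver.*minikube.*
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            // a real implementation would gather the docker ps /
            // journalctl / kubectl describe nodes diagnostics here
            time.Sleep(2500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }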
	I1223 00:05:55.444344  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:55.455763  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:55.475305  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.475332  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:55.475389  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:55.494094  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.494117  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:55.494164  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:55.511874  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.511896  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:55.511942  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:55.530088  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.530113  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:55.530159  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:55.548749  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.548778  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:55.548828  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:55.567179  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.567204  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:55.567269  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:55.586315  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.586343  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:55.586395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:55.605282  687772 logs.go:282] 0 containers: []
	W1223 00:05:55.605303  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:55.605314  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:55.605327  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:55.624085  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:55.624113  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:55.652038  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:55.652065  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:05:55.699247  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:55.699274  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:55.719031  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:55.719058  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:55.777078  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:55.769272   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.769828   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771491   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.771926   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:55.773469   18055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:58.278708  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:05:58.291024  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:05:58.310944  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.310971  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:05:58.311027  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:05:58.329419  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.329443  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:05:58.329499  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:05:58.346556  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.346579  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:05:58.346653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:05:58.364565  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.364601  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:05:58.364653  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:05:58.383020  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.383043  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:05:58.383089  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:05:58.401354  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.401381  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:05:58.401440  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:05:58.419356  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.419377  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:05:58.419426  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:05:58.438428  687772 logs.go:282] 0 containers: []
	W1223 00:05:58.438449  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:05:58.438461  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:05:58.438477  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:05:58.458325  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:05:58.458353  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:05:58.513127  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:05:58.506001   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.506523   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508086   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.508549   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:05:58.510071   18204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:05:58.513156  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:05:58.513173  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:05:58.532159  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:05:58.532183  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:05:58.559409  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:05:58.559433  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:01.105933  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:01.117378  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:01.136395  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.136418  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:01.136463  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:01.155037  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.155063  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:01.155111  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:01.173939  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.173960  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:01.174004  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:01.193250  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.193271  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:01.193312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:01.210927  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.210948  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:01.210990  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:01.229293  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.229319  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:01.229367  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:01.247971  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.247997  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:01.248059  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:01.267642  687772 logs.go:282] 0 containers: []
	W1223 00:06:01.267667  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:01.267688  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:01.267718  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:01.290552  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:01.290581  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:01.346096  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:01.339164   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.339647   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341218   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.341667   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:01.343153   18374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
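Five cycles in, the picture has not moved: every attempt spawns a fresh kubectl process (PIDs 17695, 17864, 18055, 18204, 18374 across the cycles above) and fails with a byte-identical error apart from timestamps and PIDs. Combined with the persistently empty docker ps -a listings, which would still show an exited container after a crash, this indicates the apiserver was never created at any point in the window, not that it was crash-looping.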
	I1223 00:06:01.346115  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:01.346127  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:01.364490  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:01.364516  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:01.391895  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:01.391918  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:03.938979  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:03.950393  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:03.969334  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.969364  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:03.969448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:03.988183  687772 logs.go:282] 0 containers: []
	W1223 00:06:03.988205  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:03.988252  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:04.007742  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.007767  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:04.007821  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:04.027502  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.027528  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:04.027582  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:04.048194  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.048222  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:04.048286  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:04.067020  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.067044  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:04.067096  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:04.085747  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.085776  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:04.085829  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:04.103906  687772 logs.go:282] 0 containers: []
	W1223 00:06:04.103936  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:04.103950  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:04.103963  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:04.131404  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:04.131427  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:04.178862  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:04.178893  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:04.198797  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:04.198823  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:04.255150  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:04.247324   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.247911   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249519   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.249945   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:04.251469   18547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:04.255174  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:04.255190  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:06.777149  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:06.788444  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:06.807818  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.807839  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:06.807881  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:06.827018  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.827044  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:06.827092  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:06.845320  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.845342  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:06.845395  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:06.862837  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.862856  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:06.862907  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:06.880629  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.880649  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:06.880690  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:06.898665  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.898694  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:06.898762  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:06.916571  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.916606  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:06.916662  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:06.934190  687772 logs.go:282] 0 containers: []
	W1223 00:06:06.934213  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:06.934228  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:06.934245  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:06.961869  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:06.961895  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:07.008426  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:07.008460  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:07.033602  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:07.033641  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:07.089432  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:07.082227   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.082831   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084421   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.084867   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:07.086345   18715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:07.089452  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:07.089463  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.608089  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:09.619510  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:09.638402  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.638426  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:09.638473  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:09.657218  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.657247  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:09.657292  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:09.675838  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.675871  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:09.675935  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:09.694913  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.694939  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:09.694992  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:09.714024  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.714046  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:09.714097  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:09.733120  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.733142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:09.733188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:09.752081  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.752104  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:09.752148  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:09.770630  687772 logs.go:282] 0 containers: []
	W1223 00:06:09.770661  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:09.770676  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:09.770700  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:09.818931  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:09.818967  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:09.839282  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:09.839309  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:09.895206  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:09.888285   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.888810   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890309   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.890779   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:09.891942   18867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:09.895234  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:09.895247  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:09.913965  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:09.913994  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
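Note that the order of the five collectors shuffles from cycle to cycle: at 00:05:49 it ran kubelet, dmesg, describe nodes, Docker, container status; at 00:05:55 Docker and container status came first; at 00:06:04 container status led. This is consistent with the collectors being iterated out of a Go map, whose iteration order is deliberately randomized per run (an inference from the log, not a confirmed reading of the source); the set of collectors and their output is unaffected either way. A tiny demonstration of that language property:

    // Go randomizes map iteration order on every run by design, which
    // would produce exactly this kind of shuffling between cycles
    package main

    import "fmt"

    func main() {
        collectors := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg ... | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "Docker":           "journalctl -u docker -u cri-docker -n 400",
            "container status": "crictl ps -a || docker ps -a",
        }
        for name := range collectors {
            fmt.Println("gathering logs for", name, "...") // order varies per run
        }
    }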
	I1223 00:06:12.442178  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:12.453355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:12.472243  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.472267  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:12.472312  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:12.491113  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.491136  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:12.491192  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:12.511291  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.511317  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:12.511376  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:12.532112  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.532141  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:12.532196  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:12.551226  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.551250  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:12.551293  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:12.569426  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.569449  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:12.569504  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:12.588494  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.588520  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:12.588569  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:12.606610  687772 logs.go:282] 0 containers: []
	W1223 00:06:12.606644  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:12.606657  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:12.606674  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:12.634113  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:12.634143  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:12.681112  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:12.681140  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:12.700711  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:12.700736  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:12.757239  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:12.749780   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.750485   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752070   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.752530   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:12.754079   19051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1223 00:06:12.757259  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:12.757273  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.278124  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:15.290283  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:15.309406  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.309433  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:15.309481  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:15.328093  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.328119  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:15.328173  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:15.346922  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.346949  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:15.347006  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:15.364932  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.364960  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:15.365013  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:15.383120  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.383144  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:15.383188  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:15.401332  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.401355  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:15.401404  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:15.419961  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.419986  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:15.420037  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:15.438746  687772 logs.go:282] 0 containers: []
	W1223 00:06:15.438769  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:15.438780  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:15.438793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:15.486016  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:15.486044  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:15.506911  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:15.506939  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:15.566808  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1223 00:06:15.559320   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.559937   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.561686   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.562103   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:15.563650   19201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
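A formatting quirk worth flagging when reading these entries: each failed describe nodes record prints its material twice. The command line appears back to back in the header, and the five memcache.go discovery errors plus the connection refused summary show up once under stderr: and again between the ** stderr ** / ** /stderr ** markers. The identical PIDs and timestamps inside each pair confirm this is one kubectl invocation rendered twice by the log formatter, not two separate failures.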
	I1223 00:06:15.566826  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:15.566836  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:15.586013  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:15.586040  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.115753  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:18.127221  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:18.146018  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.146048  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:18.146094  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:18.165274  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.165294  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:18.165337  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:18.183880  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.183904  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:18.183947  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:18.202061  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.202082  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:18.202130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:18.219858  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.219892  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:18.219945  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:18.238966  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.238987  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:18.239032  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:18.260921  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.260949  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:18.260997  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:18.280705  687772 logs.go:282] 0 containers: []
	W1223 00:06:18.280735  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:18.280750  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:18.280764  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:18.299732  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:18.299756  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:18.327603  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:18.327631  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:18.375722  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:18.375749  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:18.397572  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:18.397611  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:18.454135  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:18.447039   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.447614   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449142   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.449559   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:18.451077   19376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:20.955833  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:20.967309  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:20.986237  687772 logs.go:282] 0 containers: []
	W1223 00:06:20.986258  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:20.986301  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:21.004350  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.004377  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:21.004434  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:21.022893  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.022919  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:21.022974  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:21.042421  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.042441  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:21.042484  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:21.061267  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.061293  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:21.061355  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:21.079988  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.080011  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:21.080064  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:21.098196  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.098225  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:21.098279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:21.117158  687772 logs.go:282] 0 containers: []
	W1223 00:06:21.117180  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:21.117191  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:21.117202  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:21.146189  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:21.146215  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:21.192645  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:21.192677  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:21.212689  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:21.212716  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:21.269438  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:21.261783   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.262320   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.263990   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.264498   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:21.266022   19545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:21.269462  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:21.269480  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:23.789716  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:23.801130  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:23.820155  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.820180  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:23.820239  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:23.838850  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.838875  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:23.838919  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:23.856860  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.856881  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:23.856931  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:23.874630  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.874653  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:23.874700  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:23.893425  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.893454  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:23.893521  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:23.912712  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.912734  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:23.912789  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:23.931097  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.931124  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:23.931178  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:23.949113  687772 logs.go:282] 0 containers: []
	W1223 00:06:23.949138  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:23.949152  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:23.949168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:23.996109  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:23.996137  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:24.016228  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:24.016254  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:24.071647  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:24.064286   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.064800   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066333   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.066786   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:24.068314   19696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:24.071665  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:24.071680  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:24.090918  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:24.090944  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.624354  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:26.635840  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:26.654444  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.654473  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:26.654537  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:26.673364  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.673388  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:26.673436  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:26.692467  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.692489  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:26.692539  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:26.711627  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.711656  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:26.711709  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:26.730302  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.730332  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:26.730386  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:26.748910  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.748939  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:26.748995  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:26.768525  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.768548  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:26.768603  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:26.788434  687772 logs.go:282] 0 containers: []
	W1223 00:06:26.788462  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:26.788476  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:26.788491  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:26.845463  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:26.838499   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.838989   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840494   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.840922   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:26.842389   19858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:26.845482  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:26.845494  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:26.864140  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:26.864167  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:26.890448  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:26.890476  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:26.937390  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:26.937422  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.457766  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:29.469205  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:29.488353  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.488376  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:29.488431  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:29.508035  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.508059  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:29.508114  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:29.528210  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.528234  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:29.528280  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:29.546344  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.546370  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:29.546432  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:29.565125  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.565153  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:29.565200  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:29.584111  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.584142  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:29.584195  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:29.602714  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.602735  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:29.602778  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:29.621012  687772 logs.go:282] 0 containers: []
	W1223 00:06:29.621042  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:29.621058  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:29.621073  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:29.669132  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:29.669168  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:29.689406  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:29.689431  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:29.746681  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:29.739833   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.740362   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.741927   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.742376   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:29.743569   20022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:29.746703  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:29.746720  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:29.765762  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:29.765793  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:32.299443  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:32.310848  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:32.330298  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.330326  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:32.330380  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:32.349664  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.349692  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:32.349745  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:32.367944  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.367969  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:32.368081  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:32.386919  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.386940  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:32.386983  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:32.405416  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.405440  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:32.405487  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:32.423080  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.423100  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:32.423144  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:32.441255  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.441282  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:32.441336  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:32.459763  687772 logs.go:282] 0 containers: []
	W1223 00:06:32.459789  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:32.459801  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:32.459812  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:32.507284  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:32.507314  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:32.529983  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:32.530014  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:32.587816  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:32.580635   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.581177   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.582743   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.583222   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:32.584764   20192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:32.587843  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:32.587860  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:32.607796  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:32.607826  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.136489  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:35.147976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:35.166774  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.166794  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:35.166846  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:35.185872  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.185899  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:35.185949  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:35.204053  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.204074  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:35.204115  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:35.223056  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.223077  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:35.223126  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:35.241616  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.241645  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:35.241699  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:35.260422  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.260476  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:35.260536  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:35.279168  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.279192  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:35.279238  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:35.297208  687772 logs.go:282] 0 containers: []
	W1223 00:06:35.297236  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:35.297252  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:35.297267  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:35.317273  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:35.317299  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:35.374319  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:35.365790   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.367609   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.368076   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.369665   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:35.370105   20361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:35.374337  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:35.374349  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:35.393025  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:35.393050  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:35.420499  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:35.420537  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:37.968117  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:37.979448  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:37.998789  687772 logs.go:282] 0 containers: []
	W1223 00:06:37.998815  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:37.998861  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:38.019815  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.019847  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:38.019910  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:38.042524  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.042552  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:38.042617  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:38.061464  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.061489  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:38.061544  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:38.080482  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.080509  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:38.080558  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:38.099189  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.099215  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:38.099279  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:38.118161  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.118188  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:38.118244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:38.136752  687772 logs.go:282] 0 containers: []
	W1223 00:06:38.136786  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:38.136803  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:38.136819  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:38.182751  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:38.182779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:38.202352  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:38.202375  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:38.257901  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:38.250382   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.251009   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.252656   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.253166   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:38.254694   20532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:38.257922  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:38.257933  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:38.276963  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:38.276988  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:40.806792  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:40.818244  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1223 00:06:40.837324  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.837348  687772 logs.go:284] No container was found matching "kube-apiserver"
	I1223 00:06:40.837402  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1223 00:06:40.856364  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.856387  687772 logs.go:284] No container was found matching "etcd"
	I1223 00:06:40.856453  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1223 00:06:40.874753  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.874780  687772 logs.go:284] No container was found matching "coredns"
	I1223 00:06:40.874831  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1223 00:06:40.893167  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.893193  687772 logs.go:284] No container was found matching "kube-scheduler"
	I1223 00:06:40.893242  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1223 00:06:40.910901  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.910924  687772 logs.go:284] No container was found matching "kube-proxy"
	I1223 00:06:40.910976  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1223 00:06:40.930108  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.930133  687772 logs.go:284] No container was found matching "kube-controller-manager"
	I1223 00:06:40.930191  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1223 00:06:40.949021  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.949047  687772 logs.go:284] No container was found matching "kindnet"
	I1223 00:06:40.949101  687772 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1223 00:06:40.967221  687772 logs.go:282] 0 containers: []
	W1223 00:06:40.967246  687772 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1223 00:06:40.967260  687772 logs.go:123] Gathering logs for dmesg ...
	I1223 00:06:40.967276  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1223 00:06:40.988752  687772 logs.go:123] Gathering logs for describe nodes ...
	I1223 00:06:40.988779  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1223 00:06:41.048349  687772 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:06:41.040501   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.041135   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.042880   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.043313   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:06:41.044930   20693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1223 00:06:41.048374  687772 logs.go:123] Gathering logs for Docker ...
	I1223 00:06:41.048387  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1223 00:06:41.067112  687772 logs.go:123] Gathering logs for container status ...
	I1223 00:06:41.067138  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1223 00:06:41.093421  687772 logs.go:123] Gathering logs for kubelet ...
	I1223 00:06:41.093445  687772 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1223 00:06:43.639363  687772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1223 00:06:43.653263  687772 out.go:203] 
	W1223 00:06:43.654345  687772 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1223 00:06:43.654374  687772 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1223 00:06:43.654383  687772 out.go:285] * Related issues:
	W1223 00:06:43.654397  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1223 00:06:43.654411  687772 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1223 00:06:43.655505  687772 out.go:203] 
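	
	The block above is minikube's diagnostic fallback: every few seconds it polls for a kube-apiserver process and for a container per control-plane component, re-gathers the kubelet, Docker, and dmesg logs, and retries describe-nodes, until the 6m0s apiserver wait expires with K8S_APISERVER_MISSING. A minimal shell sketch of the same checks, for reproducing them by hand; the profile name no-preload-063943 is taken from the Docker journal below and may differ in other runs:
	
	  # Is an apiserver process running inside the node? (same pgrep minikube runs)
	  minikube ssh -p no-preload-063943 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # Was a container ever created for each control-plane component?
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    minikube ssh -p no-preload-063943 -- docker ps -a --filter=name=k8s_$c --format '{{.ID}} {{.Status}}'
	  done
	  # The same journals minikube collects while waiting:
	  minikube ssh -p no-preload-063943 -- sudo journalctl -u kubelet -n 400 --no-pager
	  minikube ssh -p no-preload-063943 -- sudo journalctl -u docker -u cri-docker -n 400 --no-pager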
	
	
	==> Docker <==
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.023421578Z" level=info msg="Restoring containers: start."
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.041299594Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.059213099Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.611358548Z" level=info msg="Loading containers: done."
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620920645Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620957367Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.620991634Z" level=info msg="Initializing buildkit"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.639509005Z" level=info msg="Completed buildkit initialization"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645541635Z" level=info msg="Daemon has completed initialization"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645622881Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645627452Z" level=info msg="API listen on /run/docker.sock"
	Dec 22 23:56:13 no-preload-063943 dockerd[910]: time="2025-12-22T23:56:13.645628833Z" level=info msg="API listen on [::]:2376"
	Dec 22 23:56:13 no-preload-063943 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 22 23:56:14 no-preload-063943 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Loaded network plugin cni"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 22 23:56:14 no-preload-063943 cri-dockerd[1203]: time="2025-12-22T23:56:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 22 23:56:14 no-preload-063943 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1223 00:15:50.385539   20034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:15:50.386092   20034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:15:50.387655   20034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:15:50.388110   20034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1223 00:15:50.389885   20034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 44 b0 85 99 75 08 06
	[  +2.519484] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 64 f4 88 60 6a 08 06
	[  +0.000472] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 41 81 ba 80 a4 08 06
	[Dec22 23:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +0.088099] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 12 57 26 ed f1 08 06
	[  +5.341024] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[ +14.537406] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 72 df 3b 35 8d 08 06
	[  +0.000388] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 60 1e 9e f0 0c 08 06
	[  +2.465032] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 84 3f 6a 28 22 08 06
	[  +0.000373] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 24 97 27 5a ed 08 06
	[Dec23 00:00] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	[Dec23 00:01] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 20 71 68 66 a5 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 53 f0 1e af dd 08 06
	
	
	==> kernel <==
	 00:15:50 up  3:58,  0 user,  load average: 0.09, 0.26, 0.93
	Linux no-preload-063943 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 23 00:15:46 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:15:47 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1564.
	Dec 23 00:15:47 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:47 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:47 no-preload-063943 kubelet[19839]: E1223 00:15:47.538588   19839 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:15:47 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:15:47 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:15:48 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1565.
	Dec 23 00:15:48 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:48 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:48 no-preload-063943 kubelet[19850]: E1223 00:15:48.285662   19850 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:15:48 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:15:48 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:15:48 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1566.
	Dec 23 00:15:48 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:48 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:49 no-preload-063943 kubelet[19876]: E1223 00:15:49.038954   19876 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:15:49 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:15:49 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 23 00:15:49 no-preload-063943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1567.
	Dec 23 00:15:49 no-preload-063943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:49 no-preload-063943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 23 00:15:49 no-preload-063943 kubelet[19902]: E1223 00:15:49.796884   19902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 23 00:15:49 no-preload-063943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 23 00:15:49 no-preload-063943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
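
The kubelet excerpt above contains the actual failure: the v1.35.0-rc.1 kubelet refuses to start on a cgroup v1 host, so systemd restart-loops it (restart counter at 1564 and climbing) and the apiserver can never come up, which is what K8S_APISERVER_MISSING reports. A minimal sketch for confirming a host's cgroup mode, using standard commands that are not part of the test harness:

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/
    # Docker reports the same information in its info output
    docker info --format '{{.CgroupVersion}}'

The Docker daemon log above prints its own cgroup v1 deprecation warning, which is consistent with this diagnosis.
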
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-063943 -n no-preload-063943: exit status 2 (298.309168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-063943" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (271.17s)
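
Given the repeated "connection refused" on localhost:8443 above, a quick way to confirm the diagnosis outside the harness is to probe the node directly. A diagnostic sketch, reusing the profile name from this run:

    # is there an apiserver process inside the node at all?
    minikube ssh -p no-preload-063943 "sudo pgrep -fa kube-apiserver"
    # probe the apiserver health endpoint from inside the node
    minikube ssh -p no-preload-063943 "curl -sk https://localhost:8443/healthz"
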

                                                
                                    

Test pass (371/436)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 29.68
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 11.34
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.22
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-rc.1/json-events 11.87
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.39
30 TestBinaryMirror 0.81
31 TestOffline 82.63
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 144.89
38 TestAddons/serial/Volcano 41.98
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 10.48
44 TestAddons/parallel/Registry 48.68
45 TestAddons/parallel/RegistryCreds 0.59
46 TestAddons/parallel/Ingress 50.7
47 TestAddons/parallel/InspektorGadget 10.64
48 TestAddons/parallel/MetricsServer 5.63
50 TestAddons/parallel/CSI 58.18
51 TestAddons/parallel/Headlamp 46.4
52 TestAddons/parallel/CloudSpanner 5.47
53 TestAddons/parallel/LocalPath 35.12
54 TestAddons/parallel/NvidiaDevicePlugin 6.49
55 TestAddons/parallel/Yakd 10.63
56 TestAddons/parallel/AmdGpuDevicePlugin 5.43
57 TestAddons/StoppedEnableDisable 11.27
58 TestCertOptions 31.45
59 TestCertExpiration 237.22
60 TestDockerFlags 29.25
61 TestForceSystemdFlag 30.1
62 TestForceSystemdEnv 27.59
67 TestErrorSpam/setup 26.08
68 TestErrorSpam/start 0.63
69 TestErrorSpam/status 0.95
70 TestErrorSpam/pause 1.2
71 TestErrorSpam/unpause 1.62
72 TestErrorSpam/stop 11.12
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 38.8
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 40.03
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.33
84 TestFunctional/serial/CacheCmd/cache/add_local 1.72
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.36
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 42.18
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.01
95 TestFunctional/serial/LogsFileCmd 1.03
96 TestFunctional/serial/InvalidService 4.77
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 10.98
100 TestFunctional/parallel/DryRun 0.38
101 TestFunctional/parallel/InternationalLanguage 0.16
102 TestFunctional/parallel/StatusCmd 0.97
106 TestFunctional/parallel/ServiceCmdConnect 39.52
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 60.04
110 TestFunctional/parallel/SSHCmd 0.53
111 TestFunctional/parallel/CpCmd 1.78
112 TestFunctional/parallel/MySQL 29.98
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 1.81
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
122 TestFunctional/parallel/License 0.9
123 TestFunctional/parallel/DockerEnv/bash 1.05
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
127 TestFunctional/parallel/Version/short 0.08
128 TestFunctional/parallel/Version/components 0.74
129 TestFunctional/parallel/MountCmd/any-port 39.75
130 TestFunctional/parallel/ServiceCmd/DeployApp 24.13
131 TestFunctional/parallel/MountCmd/specific-port 1.86
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
133 TestFunctional/parallel/ProfileCmd/profile_list 0.45
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
139 TestFunctional/parallel/ImageCommands/ImageBuild 5.29
140 TestFunctional/parallel/ImageCommands/Setup 30.85
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 7.19
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
153 TestFunctional/parallel/ServiceCmd/List 0.93
154 TestFunctional/parallel/ServiceCmd/JSONOutput 1.73
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
156 TestFunctional/parallel/ServiceCmd/Format 0.55
157 TestFunctional/parallel/ServiceCmd/URL 0.66
158 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
159 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.78
160 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.76
161 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
162 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
163 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
164 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
165 TestFunctional/delete_echo-server_images 0.04
166 TestFunctional/delete_my-image_image 0.02
167 TestFunctional/delete_minikube_cached_images 0.02
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.28
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 1.67
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.29
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.34
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 0.78
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 0.76
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.47
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.37
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.17
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.2
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.73
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 2.07
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.33
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 2.07
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.29
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.59
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.15
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.16
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.17
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.46
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.39
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.45
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.88
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.81
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.07
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.52
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.24
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.25
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.24
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 5.31
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 1.12
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.05
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.8
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.75
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.33
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.46
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.6
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.37
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
262 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
266 TestMultiControlPlane/serial/StartCluster 162.02
267 TestMultiControlPlane/serial/DeployApp 9.54
268 TestMultiControlPlane/serial/PingHostFromPods 1.27
269 TestMultiControlPlane/serial/AddWorkerNode 33.93
270 TestMultiControlPlane/serial/NodeLabels 0.07
271 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
272 TestMultiControlPlane/serial/CopyFile 17.94
273 TestMultiControlPlane/serial/StopSecondaryNode 11.71
274 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
275 TestMultiControlPlane/serial/RestartSecondaryNode 40.76
276 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
277 TestMultiControlPlane/serial/RestartClusterKeepsNodes 147.11
278 TestMultiControlPlane/serial/DeleteSecondaryNode 9.47
279 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
280 TestMultiControlPlane/serial/StopCluster 33.65
281 TestMultiControlPlane/serial/RestartCluster 77.3
282 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
283 TestMultiControlPlane/serial/AddSecondaryNode 48.3
284 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
287 TestImageBuild/serial/Setup 25
288 TestImageBuild/serial/NormalBuild 1.07
289 TestImageBuild/serial/BuildWithBuildArg 0.68
290 TestImageBuild/serial/BuildWithDockerIgnore 0.5
291 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.49
296 TestJSONOutput/start/Command 67.6
297 TestJSONOutput/start/Audit 0
299 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/pause/Command 0.54
303 TestJSONOutput/pause/Audit 0
305 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/unpause/Command 0.53
309 TestJSONOutput/unpause/Audit 0
311 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
314 TestJSONOutput/stop/Command 11.06
315 TestJSONOutput/stop/Audit 0
317 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
318 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
319 TestErrorJSONOutput 0.23
321 TestKicCustomNetwork/create_custom_network 27.86
322 TestKicCustomNetwork/use_default_bridge_network 27.24
323 TestKicExistingNetwork 24.17
324 TestKicCustomSubnet 29.84
325 TestKicStaticIP 29.29
326 TestMainNoArgs 0.06
327 TestMinikubeProfile 58.11
330 TestMountStart/serial/StartWithMountFirst 9.38
331 TestMountStart/serial/VerifyMountFirst 0.29
332 TestMountStart/serial/StartWithMountSecond 9.38
333 TestMountStart/serial/VerifyMountSecond 0.28
334 TestMountStart/serial/DeleteFirst 1.57
335 TestMountStart/serial/VerifyMountPostDelete 0.28
336 TestMountStart/serial/Stop 1.3
337 TestMountStart/serial/RestartStopped 10.18
338 TestMountStart/serial/VerifyMountPostStop 0.28
341 TestMultiNode/serial/FreshStart2Nodes 83.35
342 TestMultiNode/serial/DeployApp2Nodes 8.23
343 TestMultiNode/serial/PingHostFrom2Pods 0.88
344 TestMultiNode/serial/AddNode 33.59
345 TestMultiNode/serial/MultiNodeLabels 0.06
346 TestMultiNode/serial/ProfileList 0.66
347 TestMultiNode/serial/CopyFile 10.08
348 TestMultiNode/serial/StopNode 2.28
349 TestMultiNode/serial/StartAfterStop 8.44
350 TestMultiNode/serial/RestartKeepsNodes 75.05
351 TestMultiNode/serial/DeleteNode 5.32
352 TestMultiNode/serial/StopMultiNode 21.93
353 TestMultiNode/serial/RestartMultiNode 50.27
354 TestMultiNode/serial/ValidateNameConflict 28.61
361 TestScheduledStopUnix 99.53
362 TestSkaffold 114.03
364 TestInsufficientStorage 12.35
365 TestRunningBinaryUpgrade 358.97
368 TestMissingContainerUpgrade 86.15
369 TestStoppedBinaryUpgrade/Setup 5.64
371 TestPause/serial/Start 80.97
372 TestStoppedBinaryUpgrade/Upgrade 364.18
373 TestPause/serial/SecondStartNoReconfiguration 36.99
382 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
383 TestNoKubernetes/serial/StartWithK8s 25.67
384 TestPause/serial/Pause 0.49
385 TestPause/serial/VerifyStatus 0.31
386 TestPause/serial/Unpause 0.52
387 TestPause/serial/PauseAgain 0.68
388 TestPause/serial/DeletePaused 2.25
389 TestPause/serial/VerifyDeletedResources 28.45
390 TestNoKubernetes/serial/StartWithStopK8s 16.06
402 TestNoKubernetes/serial/Start 8.7
403 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
404 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
405 TestNoKubernetes/serial/ProfileList 31.65
406 TestPreload/Start-NoPreload-PullImage 91.38
407 TestNoKubernetes/serial/Stop 1.47
408 TestNoKubernetes/serial/StartNoArgs 9.7
409 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
410 TestPreload/Restart-With-Preload-Check-User-Image 47.96
412 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
414 TestStartStop/group/old-k8s-version/serial/FirstStart 80.67
417 TestStartStop/group/old-k8s-version/serial/DeployApp 12.28
418 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
419 TestStartStop/group/old-k8s-version/serial/Stop 10.94
420 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
421 TestStartStop/group/old-k8s-version/serial/SecondStart 52.2
422 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
423 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
424 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
425 TestStartStop/group/old-k8s-version/serial/Pause 2.9
427 TestStartStop/group/embed-certs/serial/FirstStart 68.38
429 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.21
430 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
431 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
432 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.05
433 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
434 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.65
435 TestStartStop/group/embed-certs/serial/DeployApp 12.25
436 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.8
437 TestStartStop/group/embed-certs/serial/Stop 11.02
438 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
439 TestStartStop/group/embed-certs/serial/SecondStart 49.54
440 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
441 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
442 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
443 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
446 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
447 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
448 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
449 TestStartStop/group/embed-certs/serial/Pause 2.6
450 TestNetworkPlugins/group/auto/Start 64.96
451 TestNetworkPlugins/group/auto/KubeletFlags 0.3
452 TestNetworkPlugins/group/auto/NetCatPod 10.17
453 TestNetworkPlugins/group/auto/DNS 0.13
454 TestNetworkPlugins/group/auto/Localhost 0.11
455 TestNetworkPlugins/group/auto/HairPin 0.11
456 TestNetworkPlugins/group/kindnet/Start 50.35
457 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
458 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
459 TestNetworkPlugins/group/kindnet/NetCatPod 8.16
460 TestNetworkPlugins/group/kindnet/DNS 0.14
461 TestNetworkPlugins/group/kindnet/Localhost 0.11
462 TestNetworkPlugins/group/kindnet/HairPin 0.12
463 TestNetworkPlugins/group/calico/Start 67
466 TestNetworkPlugins/group/calico/ControllerPod 6.01
467 TestNetworkPlugins/group/calico/KubeletFlags 0.29
468 TestNetworkPlugins/group/calico/NetCatPod 10.17
469 TestNetworkPlugins/group/calico/DNS 0.15
470 TestNetworkPlugins/group/calico/Localhost 0.12
471 TestNetworkPlugins/group/calico/HairPin 0.11
472 TestNetworkPlugins/group/custom-flannel/Start 45.27
473 TestStartStop/group/no-preload/serial/Stop 1.32
474 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
476 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
477 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
478 TestNetworkPlugins/group/custom-flannel/DNS 0.14
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
481 TestNetworkPlugins/group/false/Start 66.92
482 TestNetworkPlugins/group/enable-default-cni/Start 63.81
483 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
484 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.18
485 TestNetworkPlugins/group/false/KubeletFlags 0.34
486 TestNetworkPlugins/group/false/NetCatPod 10.2
487 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
488 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
489 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
490 TestNetworkPlugins/group/false/DNS 0.14
491 TestNetworkPlugins/group/false/Localhost 0.12
492 TestNetworkPlugins/group/false/HairPin 0.11
493 TestNetworkPlugins/group/flannel/Start 45.35
494 TestNetworkPlugins/group/bridge/Start 42.73
495 TestStartStop/group/newest-cni/serial/DeployApp 0
497 TestNetworkPlugins/group/flannel/ControllerPod 6.01
498 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
499 TestNetworkPlugins/group/bridge/NetCatPod 10.18
500 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
501 TestNetworkPlugins/group/flannel/NetCatPod 8.17
502 TestNetworkPlugins/group/bridge/DNS 0.14
503 TestNetworkPlugins/group/bridge/Localhost 0.11
504 TestNetworkPlugins/group/bridge/HairPin 0.12
505 TestNetworkPlugins/group/flannel/DNS 0.14
506 TestNetworkPlugins/group/flannel/Localhost 0.12
507 TestNetworkPlugins/group/flannel/HairPin 0.12
508 TestNetworkPlugins/group/kubenet/Start 66.93
509 TestPreload/PreloadSrc/gcs 14.72
510 TestPreload/PreloadSrc/github 19.27
511 TestStartStop/group/newest-cni/serial/Stop 1.35
512 TestPreload/PreloadSrc/gcs-cached 0.68
513 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
515 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
516 TestNetworkPlugins/group/kubenet/NetCatPod 9.21
517 TestNetworkPlugins/group/kubenet/DNS 0.14
518 TestNetworkPlugins/group/kubenet/Localhost 0.11
519 TestNetworkPlugins/group/kubenet/HairPin 0.13
521 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
522 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
523 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
TestDownloadOnly/v1.28.0/json-events (29.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-726442 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-726442 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (29.682716824s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (29.68s)
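
The -o=json flag makes minikube emit one CloudEvents JSON object per line; the TestJSONOutput/*/parallel/DistinctCurrentSteps and IncreasingCurrentSteps entries in the pass table above assert that the step counter in that stream is well formed. A sketch for inspecting the stream by hand (the event type and data field names are assumed from minikube's CloudEvents JSON output; jq required):

    # print tab-separated "currentstep  name" for every step event emitted during start
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-726442 --force \
      --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | [.data.currentstep, .data.name] | @tsv'
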

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1222 22:32:39.183923   75803 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1222 22:32:39.184030   75803 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
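
The check is purely local: it asserts that the preload tarball already sits in the cache directory. The equivalent manual check, with the cache path taken from the log line above:

    ls -lh /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/
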

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-726442
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-726442: exit status 85 (68.966449ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-726442 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-726442 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:32:09
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:32:09.553706   75815 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:32:09.553804   75815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:32:09.553813   75815 out.go:374] Setting ErrFile to fd 2...
	I1222 22:32:09.553818   75815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:32:09.554061   75815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	W1222 22:32:09.554179   75815 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22301-72233/.minikube/config/config.json: open /home/jenkins/minikube-integration/22301-72233/.minikube/config/config.json: no such file or directory
	I1222 22:32:09.554677   75815 out.go:368] Setting JSON to true
	I1222 22:32:09.555662   75815 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8070,"bootTime":1766434660,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:32:09.555714   75815 start.go:143] virtualization: kvm guest
	I1222 22:32:09.559060   75815 out.go:99] [download-only-726442] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:32:09.559169   75815 notify.go:221] Checking for updates...
	W1222 22:32:09.559173   75815 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball: no such file or directory
	I1222 22:32:09.560266   75815 out.go:171] MINIKUBE_LOCATION=22301
	I1222 22:32:09.561817   75815 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:32:09.562950   75815 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:32:09.563883   75815 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:32:09.565012   75815 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1222 22:32:09.566899   75815 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 22:32:09.567130   75815 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:32:09.590194   75815 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:32:09.590305   75815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:32:09.780689   75815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-22 22:32:09.770445972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:32:09.780801   75815 docker.go:319] overlay module found
	I1222 22:32:09.782249   75815 out.go:99] Using the docker driver based on user configuration
	I1222 22:32:09.782284   75815 start.go:309] selected driver: docker
	I1222 22:32:09.782293   75815 start.go:928] validating driver "docker" against <nil>
	I1222 22:32:09.782383   75815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:32:09.835654   75815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-22 22:32:09.826438386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:32:09.835803   75815 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 22:32:09.836304   75815 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1222 22:32:09.836450   75815 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 22:32:09.837876   75815 out.go:171] Using Docker driver with root privileges
	I1222 22:32:09.838716   75815 cni.go:84] Creating CNI manager for ""
	I1222 22:32:09.838778   75815 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:32:09.838789   75815 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 22:32:09.838856   75815 start.go:353] cluster config:
	{Name:download-only-726442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-726442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:32:09.839757   75815 out.go:99] Starting "download-only-726442" primary control-plane node in "download-only-726442" cluster
	I1222 22:32:09.839769   75815 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:32:09.840508   75815 out.go:99] Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:32:09.840533   75815 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1222 22:32:09.840648   75815 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:32:09.857040   75815 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 to local cache
	I1222 22:32:09.857261   75815 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local cache directory
	I1222 22:32:09.857368   75815 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 to local cache
	I1222 22:32:10.272503   75815 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1222 22:32:10.272559   75815 cache.go:65] Caching tarball of preloaded images
	I1222 22:32:10.272758   75815 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1222 22:32:10.274398   75815 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1222 22:32:10.274419   75815 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1222 22:32:10.274425   75815 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1222 22:32:10.427242   75815 preload.go:313] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1222 22:32:10.427406   75815 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1222 22:32:24.097185   75815 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1222 22:32:24.097616   75815 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/download-only-726442/config.json ...
	I1222 22:32:24.097652   75815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/download-only-726442/config.json: {Name:mkdcae1ef684026048bd8831b9d3ee7703add25e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1222 22:32:24.097861   75815 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1222 22:32:24.098090   75815 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-726442 host does not exist
	  To start a cluster, run: "minikube start -p download-only-726442"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
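
The Last Start log above shows the preload being fetched with an MD5 checksum query parameter, the checksum (8a955be835827bc584bcce0658a7fcc9) having been obtained from the GCS API before the download. Verifying the cached tarball by hand with standard coreutils (md5sum -c expects two spaces between checksum and filename):

    cd /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball
    echo "8a955be835827bc584bcce0658a7fcc9  preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4" | md5sum -c -
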

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-726442
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (11.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-350597 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-350597 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.339430682s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (11.34s)

                                                
                                    
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1222 22:32:50.958193   75803 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
I1222 22:32:50.958230   75803 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-350597
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-350597: exit status 85 (68.330774ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-726442 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-726442 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │ 22 Dec 25 22:32 UTC │
	│ delete  │ -p download-only-726442                                                                                                                                                       │ download-only-726442 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │ 22 Dec 25 22:32 UTC │
	│ start   │ -o=json --download-only -p download-only-350597 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-350597 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:32:39
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:32:39.671397   76254 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:32:39.671686   76254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:32:39.671696   76254 out.go:374] Setting ErrFile to fd 2...
	I1222 22:32:39.671701   76254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:32:39.671911   76254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:32:39.672418   76254 out.go:368] Setting JSON to true
	I1222 22:32:39.673282   76254 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8100,"bootTime":1766434660,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:32:39.673355   76254 start.go:143] virtualization: kvm guest
	I1222 22:32:39.675347   76254 out.go:99] [download-only-350597] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:32:39.675539   76254 notify.go:221] Checking for updates...
	I1222 22:32:39.676623   76254 out.go:171] MINIKUBE_LOCATION=22301
	I1222 22:32:39.677985   76254 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:32:39.679266   76254 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:32:39.680365   76254 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:32:39.681491   76254 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1222 22:32:39.683545   76254 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 22:32:39.683876   76254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:32:39.706527   76254 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:32:39.706707   76254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:32:39.760271   76254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:45 SystemTime:2025-12-22 22:32:39.751437783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:32:39.760379   76254 docker.go:319] overlay module found
	I1222 22:32:39.762112   76254 out.go:99] Using the docker driver based on user configuration
	I1222 22:32:39.762153   76254 start.go:309] selected driver: docker
	I1222 22:32:39.762165   76254 start.go:928] validating driver "docker" against <nil>
	I1222 22:32:39.762269   76254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:32:39.818872   76254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:45 SystemTime:2025-12-22 22:32:39.809567437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:32:39.819024   76254 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 22:32:39.819511   76254 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1222 22:32:39.819684   76254 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 22:32:39.821487   76254 out.go:171] Using Docker driver with root privileges
	I1222 22:32:39.822468   76254 cni.go:84] Creating CNI manager for ""
	I1222 22:32:39.822529   76254 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:32:39.822537   76254 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 22:32:39.822608   76254 start.go:353] cluster config:
	{Name:download-only-350597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-350597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:32:39.823956   76254 out.go:99] Starting "download-only-350597" primary control-plane node in "download-only-350597" cluster
	I1222 22:32:39.823981   76254 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:32:39.825018   76254 out.go:99] Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:32:39.825047   76254 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 22:32:39.825182   76254 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:32:39.841606   76254 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 to local cache
	I1222 22:32:39.841764   76254 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local cache directory
	I1222 22:32:39.841782   76254 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local cache directory, skipping pull
	I1222 22:32:39.841786   76254 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in cache, skipping pull
	I1222 22:32:39.841797   76254 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 as a tarball
	I1222 22:32:39.977001   76254 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1222 22:32:39.977073   76254 cache.go:65] Caching tarball of preloaded images
	I1222 22:32:39.977313   76254 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1222 22:32:39.979365   76254 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1222 22:32:39.979385   76254 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	I1222 22:32:39.979391   76254 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1222 22:32:40.129824   76254 preload.go:313] Got checksum from GCS API "2968966fc29eb8b579cb5fae535bf3b1"
	I1222 22:32:40.129880   76254 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2968966fc29eb8b579cb5fae535bf3b1 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-350597 host does not exist
	  To start a cluster, run: "minikube start -p download-only-350597"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)
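The LogsDuration checks in this report assert that "minikube logs" exits with status 85 when a download-only profile exists but its host was never created. A minimal sketch of that style of exit-code assertion with Go's os/exec (binary path and profile name are copied from the log above; the helper itself is illustrative, not minikube's actual test code):

// exitcode_sketch.go - assert a specific exit code from a CLI invocation.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-350597")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// ExitCode reports what the process exited with; the test above
		// expects 85 because the profile's control-plane host does not exist.
		fmt.Printf("exit status %d\n", exitErr.ExitCode())
	case err != nil:
		fmt.Println("command failed to start:", err)
	default:
		fmt.Println("command succeeded unexpectedly")
	}
}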

TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-350597
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-rc.1/json-events (11.87s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-407127 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-407127 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.86640241s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (11.87s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1222 22:33:03.250211   75803 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
I1222 22:33:03.250254   75803 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-407127
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-407127: exit status 85 (71.172112ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                        │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-726442 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker      │ download-only-726442 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │ 22 Dec 25 22:32 UTC │
	│ delete  │ -p download-only-726442                                                                                                                                                            │ download-only-726442 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │ 22 Dec 25 22:32 UTC │
	│ start   │ -o=json --download-only -p download-only-350597 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker  --container-runtime=docker      │ download-only-350597 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                              │ minikube             │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │ 22 Dec 25 22:32 UTC │
	│ delete  │ -p download-only-350597                                                                                                                                                            │ download-only-350597 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │ 22 Dec 25 22:32 UTC │
	│ start   │ -o=json --download-only -p download-only-407127 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-407127 │ jenkins │ v1.37.0 │ 22 Dec 25 22:32 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/22 22:32:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1222 22:32:51.435163   76630 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:32:51.435455   76630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:32:51.435466   76630 out.go:374] Setting ErrFile to fd 2...
	I1222 22:32:51.435472   76630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:32:51.435709   76630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:32:51.436208   76630 out.go:368] Setting JSON to true
	I1222 22:32:51.437068   76630 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8111,"bootTime":1766434660,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:32:51.437126   76630 start.go:143] virtualization: kvm guest
	I1222 22:32:51.438824   76630 out.go:99] [download-only-407127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:32:51.438953   76630 notify.go:221] Checking for updates...
	I1222 22:32:51.440006   76630 out.go:171] MINIKUBE_LOCATION=22301
	I1222 22:32:51.441102   76630 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:32:51.442228   76630 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:32:51.443540   76630 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:32:51.444699   76630 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1222 22:32:51.446717   76630 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1222 22:32:51.446934   76630 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:32:51.468388   76630 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:32:51.468510   76630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:32:51.523622   76630 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:45 SystemTime:2025-12-22 22:32:51.514697092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:32:51.523727   76630 docker.go:319] overlay module found
	I1222 22:32:51.525117   76630 out.go:99] Using the docker driver based on user configuration
	I1222 22:32:51.525154   76630 start.go:309] selected driver: docker
	I1222 22:32:51.525160   76630 start.go:928] validating driver "docker" against <nil>
	I1222 22:32:51.525272   76630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:32:51.582584   76630 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:45 SystemTime:2025-12-22 22:32:51.573319932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:32:51.582793   76630 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1222 22:32:51.583269   76630 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1222 22:32:51.583446   76630 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1222 22:32:51.584956   76630 out.go:171] Using Docker driver with root privileges
	I1222 22:32:51.586039   76630 cni.go:84] Creating CNI manager for ""
	I1222 22:32:51.586107   76630 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1222 22:32:51.586124   76630 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1222 22:32:51.586199   76630 start.go:353] cluster config:
	{Name:download-only-407127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-407127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:32:51.587267   76630 out.go:99] Starting "download-only-407127" primary control-plane node in "download-only-407127" cluster
	I1222 22:32:51.587287   76630 cache.go:134] Beginning downloading kic base image for docker with docker
	I1222 22:32:51.588200   76630 out.go:99] Pulling base image v0.0.48-1766394456-22288 ...
	I1222 22:32:51.588229   76630 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:32:51.588327   76630 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local docker daemon
	I1222 22:32:51.604223   76630 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 to local cache
	I1222 22:32:51.604388   76630 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local cache directory
	I1222 22:32:51.604411   76630 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 in local cache directory, skipping pull
	I1222 22:32:51.604416   76630 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 exists in cache, skipping pull
	I1222 22:32:51.604428   76630 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 as a tarball
	I1222 22:32:52.149346   76630 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:32:52.149400   76630 cache.go:65] Caching tarball of preloaded images
	I1222 22:32:52.149647   76630 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1222 22:32:52.151345   76630 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1222 22:32:52.151369   76630 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1222 22:32:52.151376   76630 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1222 22:32:52.305159   76630 preload.go:313] Got checksum from GCS API "69672a26de652c41c080c5ec079f9718"
	I1222 22:32:52.305219   76630 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4?checksum=md5:69672a26de652c41c080c5ec079f9718 -> /home/jenkins/minikube-integration/22301-72233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-407127 host does not exist
	  To start a cluster, run: "minikube start -p download-only-407127"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)
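The preload download traced above is checksum-verified: download.go fetches the expected md5 from the GCS API and appends it to the URL as "?checksum=md5:<hex>". A minimal sketch of that pattern, hashing the stream while writing it to disk (URL and digest are copied from the log; this is illustrative, not minikube's downloader):

// checksum_download_sketch.go - stream a download through an md5 hasher.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, wantHex, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Tee the response body through the hasher so the digest is computed
	// in the same single pass that writes the tarball to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4"
	if err := downloadWithMD5(url, "69672a26de652c41c080c5ec079f9718", "preload.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}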

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-407127
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.39s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-407263 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "download-docker-407263" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-407263
--- PASS: TestDownloadOnlyKic (0.39s)

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
I1222 22:33:04.495683   75803 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-086441 --alsologtostderr --binary-mirror http://127.0.0.1:45153 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-086441" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-086441
--- PASS: TestBinaryMirror (0.81s)
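TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:45153) so Kubernetes binaries are fetched from it instead of dl.k8s.io. A minimal sketch of such a mirror as a static file server (the "./mirror" directory layout is an assumption; the test uses its own in-process helper):

// binary_mirror_sketch.go - serve cached Kubernetes binaries over HTTP.
package main

import (
	"log"
	"net/http"
)

func main() {
	// "minikube start --binary-mirror http://127.0.0.1:45153" would then
	// resolve release binaries against this server rather than dl.k8s.io.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:45153", fs))
}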

TestOffline (82.63s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-501252 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-501252 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m20.161675768s)
helpers_test.go:176: Cleaning up "offline-docker-501252" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-501252
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-501252: (2.469492554s)
--- PASS: TestOffline (82.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-268945
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-268945: exit status 85 (65.270006ms)

-- stdout --
	* Profile "addons-268945" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268945"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-268945
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-268945: exit status 85 (64.732786ms)

-- stdout --
	* Profile "addons-268945" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268945"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (144.89s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-268945 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-268945 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m24.886390174s)
--- PASS: TestAddons/Setup (144.89s)

TestAddons/serial/Volcano (41.98s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 14.641246ms
addons_test.go:870: volcano-scheduler stabilized in 14.692984ms
addons_test.go:886: volcano-controller stabilized in 14.730872ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-blp75" [ced58d36-33a9-43fa-ac60-0a2c2427cea4] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003639727s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-frkj9" [1a83ef51-dacc-4bcf-b9f7-db00be65e704] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00348661s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-lctlt" [e4d55082-bfb0-4fec-92e0-3e9a8654fa21] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002613191s
addons_test.go:905: (dbg) Run:  kubectl --context addons-268945 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-268945 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-268945 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [13748062-f7c9-440c-8f58-75e180478277] Pending
helpers_test.go:353: "test-job-nginx-0" [13748062-f7c9-440c-8f58-75e180478277] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [13748062-f7c9-440c-8f58-75e180478277] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003969409s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-268945 addons disable volcano --alsologtostderr -v=1: (11.634592097s)
--- PASS: TestAddons/serial/Volcano (41.98s)
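The Volcano checks above follow the report's standard pattern: poll for pods matching a label selector until they are healthy, under a hard timeout (6m0s here). A minimal sketch of that wait loop against client-go (kubeconfig path, poll interval, and the Running-only criterion are simplifications of the test's Running/Ready check):

// podwait_sketch.go - poll for pods by label selector until Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // poll, as the helpers above do
	}
	return fmt.Errorf("pods %q in %q not running within %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute))
}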

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-268945 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-268945 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (10.48s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-268945 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-268945 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [813127de-172b-47c6-ac10-4aa5a1b851b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [813127de-172b-47c6-ac10-4aa5a1b851b0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003116s
addons_test.go:696: (dbg) Run:  kubectl --context addons-268945 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-268945 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-268945 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.48s)

TestAddons/parallel/Registry (48.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 4.551006ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-vqpg6" [3d02fd39-7eca-4a67-8be1-e7c56c3d799f] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002526868s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-5f7rf" [280d21b3-220a-4ac2-8709-0e0662231cc5] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00376645s
addons_test.go:394: (dbg) Run:  kubectl --context addons-268945 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-268945 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-268945 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (36.615036951s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 ip
2025/12/22 22:37:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (48.68s)
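The registry check above probes the in-cluster service with "wget --spider -S", which fetches headers without downloading the body. The closest HTTP-native equivalent is a HEAD request; a minimal sketch follows (the test actually runs wget inside a throwaway busybox pod, since the service DNS name only resolves in-cluster):

// registry_probe_sketch.go - header-only reachability probe for the registry.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}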

TestAddons/parallel/RegistryCreds (0.59s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.160056ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-268945
addons_test.go:334: (dbg) Run:  kubectl --context addons-268945 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.59s)

TestAddons/parallel/Ingress (50.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-268945 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-268945 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-268945 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [e55f3371-393f-4b51-9511-7ce488aa99fd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [e55f3371-393f-4b51-9511-7ce488aa99fd] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 40.003305508s
I1222 22:37:19.446632   75803 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-268945 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-268945 addons disable ingress-dns --alsologtostderr -v=1: (1.499170717s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-268945 addons disable ingress --alsologtostderr -v=1: (7.669208701s)
--- PASS: TestAddons/parallel/Ingress (50.70s)
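The ingress check above curls the node by IP while presenting the virtual host "nginx.example.com", so the controller routes the request to the nginx backend. In Go, the equivalent request overrides req.Host rather than setting a header (a sketch; the test shells its curl through "minikube ssh"):

// hostheader_sketch.go - request an ingress vhost by IP.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// For HTTP/1.1 requests, net/http takes the Host header from req.Host;
	// a "Host" entry in req.Header would be ignored.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}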

TestAddons/parallel/InspektorGadget (10.64s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-kvd5x" [adf510f9-e78f-4671-8145-3e29f322391f] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003080541s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-268945 addons disable inspektor-gadget --alsologtostderr -v=1: (5.639632894s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

TestAddons/parallel/MetricsServer (5.63s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.713747ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-sfssb" [0aa17c37-0ab7-410d-9c72-392450d3ac04] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002697535s
addons_test.go:465: (dbg) Run:  kubectl --context addons-268945 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

TestAddons/parallel/CSI (58.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1222 22:37:21.156926   75803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1222 22:37:21.160848   75803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1222 22:37:21.160876   75803 kapi.go:107] duration metric: took 3.959726ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.971956ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-268945 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc -o jsonpath={.status.phase} -n default
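The repeated helpers_test.go:403 lines above are one poll loop: the helper re-runs "kubectl get pvc hpvc -o jsonpath={.status.phase}" until the claim reports Bound or the 6m0s budget expires. A minimal sketch of that loop (context and claim name copied from the log; the 2s interval is an assumption):

// pvcwait_sketch.go - poll a PVC's phase via kubectl until it is Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForBound(kubeContext, pvc, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", pvc,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, pvc, timeout)
}

func main() {
	fmt.Println(waitForBound("addons-268945", "hpvc", "default", 6*time.Minute))
}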
addons_test.go:564: (dbg) Run:  kubectl --context addons-268945 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [de2575f9-3139-4432-8990-31459e51afd2] Pending
helpers_test.go:353: "task-pv-pod" [de2575f9-3139-4432-8990-31459e51afd2] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003360842s
addons_test.go:574: (dbg) Run:  kubectl --context addons-268945 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-268945 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-268945 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-268945 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-268945 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-268945 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
	(identical poll of pvc "hpvc-restore" repeated 21 times in total)
addons_test.go:606: (dbg) Run:  kubectl --context addons-268945 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [4794fc56-ab1d-4d43-9a3f-4f3c55789dd4] Pending
helpers_test.go:353: "task-pv-pod-restore" [4794fc56-ab1d-4d43-9a3f-4f3c55789dd4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [4794fc56-ab1d-4d43-9a3f-4f3c55789dd4] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005889395s
addons_test.go:616: (dbg) Run:  kubectl --context addons-268945 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-268945 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-268945 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-268945 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.55555527s)
--- PASS: TestAddons/parallel/CSI (58.18s)
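
For reference, the snapshot/restore flow exercised above can be reproduced by hand. A minimal sketch, assuming the addons install a snapshot class named csi-hostpath-snapclass and a storage class named csi-hostpath-sc (neither name appears in this log), with snapshot.yaml and pvc-restore.yaml as hypothetical stand-ins for the repo's testdata manifests:

    out/minikube-linux-amd64 -p addons-268945 addons enable volumesnapshots
    out/minikube-linux-amd64 -p addons-268945 addons enable csi-hostpath-driver

    # snapshot.yaml -- snapshot an existing bound claim
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc

    # pvc-restore.yaml -- clone the snapshot into a fresh claim via dataSource
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc                 # assumed class name
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io

    kubectl --context addons-268945 apply -f snapshot.yaml
    kubectl --context addons-268945 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-268945 apply -f pvc-restore.yaml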

TestAddons/parallel/Headlamp (46.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-268945 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-7f5bcd4678-jhh2c" [71cae3ee-d6c3-4b6b-9087-4eb2a2a657ae] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-7f5bcd4678-jhh2c" [71cae3ee-d6c3-4b6b-9087-4eb2a2a657ae] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 40.002761899s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-268945 addons disable headlamp --alsologtostderr -v=1: (5.610082005s)
--- PASS: TestAddons/parallel/Headlamp (46.40s)
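
The enable/wait/disable pattern above generalizes to any addon that deploys pods. A minimal sketch, using kubectl wait as a stand-in for the test's label-selector polling:

    out/minikube-linux-amd64 addons enable headlamp -p addons-268945
    kubectl --context addons-268945 -n headlamp wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=headlamp --timeout=8m0s
    out/minikube-linux-amd64 -p addons-268945 addons disable headlamp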

TestAddons/parallel/CloudSpanner (5.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-85df47b6f4-6j479" [ec9c0106-e24b-4ac1-938a-7fc464c3ba30] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003779777s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

TestAddons/parallel/LocalPath (35.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-268945 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-268945 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-268945 get pvc test-pvc -o jsonpath={.status.phase} -n default
	(identical poll of pvc "test-pvc" repeated 29 times in total)
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [79b9f09d-b5bd-402a-852b-58c62f147d8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [79b9f09d-b5bd-402a-852b-58c62f147d8d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [79b9f09d-b5bd-402a-852b-58c62f147d8d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00347122s
addons_test.go:969: (dbg) Run:  kubectl --context addons-268945 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 ssh "cat /opt/local-path-provisioner/pvc-286bdfb6-4b24-402c-9484-c1735b712754_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-268945 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-268945 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (35.12s)
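
The ssh step above shows where the rancher local-path provisioner keeps volume data: under /opt/local-path-provisioner on the node, one directory per bound claim. A minimal sketch of a claim that targets it, assuming the addon registers a storage class named local-path (the class name is not visible in this log), with test-pvc.yaml as a hypothetical stand-in for the repo's testdata manifest:

    # test-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # assumed class name
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 64Mi

    kubectl --context addons-268945 apply -f test-pvc.yaml
    # the backing directory appears once a pod consumes the claim
    out/minikube-linux-amd64 -p addons-268945 ssh "ls /opt/local-path-provisioner/"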

TestAddons/parallel/NvidiaDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-xgkx2" [f2c7d755-f790-412f-9a36-226a3188c135] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003080693s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

TestAddons/parallel/Yakd (10.63s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7896b7cb5b-f4v77" [4c52e810-543e-4894-aab0-cbf929483508] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003785047s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-268945 addons disable yakd --alsologtostderr -v=1: (5.627738497s)
--- PASS: TestAddons/parallel/Yakd (10.63s)

TestAddons/parallel/AmdGpuDevicePlugin (5.43s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-n2t6v" [5b1a6a0b-13f7-4d6b-9cec-9d5ffe02d3ba] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.00308743s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-268945 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.43s)

TestAddons/StoppedEnableDisable (11.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-268945
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-268945: (10.984627103s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-268945
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-268945
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-268945
--- PASS: TestAddons/StoppedEnableDisable (11.27s)

TestCertOptions (31.45s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-646336 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-646336 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.553621253s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-646336 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-646336 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-646336 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-646336" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-646336
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-646336: (2.235985564s)
--- PASS: TestCertOptions (31.45s)
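
The openssl call above is how the extra SANs and the custom port are verified in the generated apiserver certificate. The equivalent manual check:

    out/minikube-linux-amd64 -p cert-options-646336 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # expected to list 192.168.15.15, localhost and www.google.com alongside the defaults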

TestCertExpiration (237.22s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-628145 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-628145 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.354020249s)
E1222 23:45:00.034209   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-628145 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-628145 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (26.516704215s)
helpers_test.go:176: Cleaning up "cert-expiration-628145" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-628145
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-628145: (2.348877529s)
--- PASS: TestCertExpiration (237.22s)
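
The test drives certificate renewal by starting with a 3m expiry window, letting it lapse, then restarting with a one-year window. A sketch of the same sequence by hand (the final openssl call should print the renewed notAfter date):

    out/minikube-linux-amd64 start -p cert-expiration-628145 --cert-expiration=3m --driver=docker --container-runtime=docker
    sleep 180   # let the short-lived certs expire
    out/minikube-linux-amd64 start -p cert-expiration-628145 --cert-expiration=8760h --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p cert-expiration-628145 ssh \
      "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"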

TestDockerFlags (29.25s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-071533 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-071533 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.32402883s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-071533 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-071533 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-071533" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-071533
E1222 23:45:31.033549   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-071533: (2.281357209s)
--- PASS: TestDockerFlags (29.25s)
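
The two systemctl probes above are how the --docker-env and --docker-opt values are asserted: env vars land in the docker unit's Environment, opts become dockerd flags in its ExecStart. To inspect by hand:

    out/minikube-linux-amd64 -p docker-flags-071533 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # expected to contain FOO=BAR and BAZ=BAT
    out/minikube-linux-amd64 -p docker-flags-071533 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    # expected to contain --debug and --icc=true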

TestForceSystemdFlag (30.1s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-280058 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1222 23:40:31.032893   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-280058 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.448177825s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-280058 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-280058" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-280058
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-280058: (2.284843833s)
--- PASS: TestForceSystemdFlag (30.10s)
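
The docker info probe above is the assertion: with --force-systemd, dockerd inside the node must report the systemd cgroup driver rather than the default cgroupfs:

    out/minikube-linux-amd64 start -p force-systemd-flag-280058 --force-systemd --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p force-systemd-flag-280058 ssh "docker info --format {{.CgroupDriver}}"
    # expected output: systemd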

TestForceSystemdEnv (27.59s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-972338 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-972338 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.029400912s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-972338 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-972338" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-972338
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-972338: (2.198497185s)
--- PASS: TestForceSystemdEnv (27.59s)
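
Same assertion as TestForceSystemdFlag, but driven through the environment instead of the flag. A sketch, assuming the MINIKUBE_FORCE_SYSTEMD variable (visible in the dry-run output later in this report) accepts "true":

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-972338 --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p force-systemd-env-972338 ssh "docker info --format {{.CgroupDriver}}"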

TestErrorSpam/setup (26.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-659603 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-659603 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-659603 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-659603 --driver=docker  --container-runtime=docker: (26.075567437s)
--- PASS: TestErrorSpam/setup (26.08s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (1.2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 pause
--- PASS: TestErrorSpam/pause (1.20s)

TestErrorSpam/unpause (1.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

TestErrorSpam/stop (11.12s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 stop: (10.921067723s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-659603 --log_dir /tmp/nospam-659603 stop
--- PASS: TestErrorSpam/stop (11.12s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580825 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-580825 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (38.797636035s)
--- PASS: TestFunctional/serial/StartWithProxy (38.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.03s)

=== RUN   TestFunctional/serial/SoftStart
I1222 22:39:55.280325   75803 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580825 --alsologtostderr -v=8
E1222 22:40:31.032914   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:31.038229   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:31.049103   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:31.069218   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:31.109557   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:31.189864   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:31.350327   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:31.671061   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:32.312209   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:40:33.592362   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-580825 --alsologtostderr -v=8: (40.030963088s)
functional_test.go:678: soft start took 40.031747229s for "functional-580825" cluster.
I1222 22:40:35.311754   75803 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (40.03s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-580825 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cache add registry.k8s.io/pause:3.1
E1222 22:40:36.153542   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.33s)

TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-580825 /tmp/TestFunctionalserialCacheCmdcacheadd_local247564560/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cache add minikube-local-cache-test:functional-580825
functional_test.go:1109: (dbg) Done: out/minikube-linux-amd64 -p functional-580825 cache add minikube-local-cache-test:functional-580825: (1.404965757s)
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cache delete minikube-local-cache-test:functional-580825
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-580825
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.578801ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.36s)
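
The reload sequence above: remove the image out from under the runtime, confirm crictl no longer sees it, then cache reload pushes everything in the local cache back into the node. By hand:

    out/minikube-linux-amd64 -p functional-580825 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-580825 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
    out/minikube-linux-amd64 -p functional-580825 cache reload
    out/minikube-linux-amd64 -p functional-580825 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again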

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
E1222 22:40:41.274099   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 kubectl -- --context functional-580825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-580825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (42.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1222 22:40:51.514664   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 22:41:11.994999   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-580825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.177130026s)
functional_test.go:776: restart took 42.177275284s for "functional-580825" cluster.
I1222 22:41:23.775244   75803 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (42.18s)
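
--extra-config takes component.key=value pairs and merges them into the restarted control plane. A sketch of confirming the flag reached the apiserver, with the component=kube-apiserver label selector assumed from the kubeadm convention:

    out/minikube-linux-amd64 start -p functional-580825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-580825 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins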

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-580825 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
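
The health check above selects the control-plane pods by their tier=control-plane label and asserts each is Running and Ready. A sketch of the raw query it parses:

    kubectl --context functional-580825 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'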

TestFunctional/serial/LogsCmd (1.01s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-580825 logs: (1.013150808s)
--- PASS: TestFunctional/serial/LogsCmd (1.01s)

TestFunctional/serial/LogsFileCmd (1.03s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 logs --file /tmp/TestFunctionalserialLogsFileCmd2840318422/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-580825 logs --file /tmp/TestFunctionalserialLogsFileCmd2840318422/001/logs.txt: (1.029620992s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.03s)

TestFunctional/serial/InvalidService (4.77s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-580825 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-580825
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-580825: exit status 115 (349.59007ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30492 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-580825 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-580825 delete -f testdata/invalidsvc.yaml: (1.25661829s)
--- PASS: TestFunctional/serial/InvalidService (4.77s)
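
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the Service object exists, but no running pod backs it, so there is nothing for minikube service to tunnel to. A quick way to observe the same condition:

    kubectl --context functional-580825 get endpoints invalid-svc
    # an empty ENDPOINTS column means `minikube service` has no target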

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 config get cpus: exit status 14 (74.150852ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 config get cpus: exit status 14 (71.845396ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (10.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-580825 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-580825 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 130617: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.98s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-580825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (163.884419ms)

-- stdout --
	* [functional-580825] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1222 22:42:25.821617  130008 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:42:25.821883  130008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:42:25.821894  130008 out.go:374] Setting ErrFile to fd 2...
	I1222 22:42:25.821898  130008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:42:25.822142  130008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:42:25.822636  130008 out.go:368] Setting JSON to false
	I1222 22:42:25.823746  130008 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8686,"bootTime":1766434660,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:42:25.823796  130008 start.go:143] virtualization: kvm guest
	I1222 22:42:25.825699  130008 out.go:179] * [functional-580825] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 22:42:25.826696  130008 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:42:25.826759  130008 notify.go:221] Checking for updates...
	I1222 22:42:25.828968  130008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:42:25.830097  130008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:42:25.834818  130008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:42:25.836077  130008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:42:25.837253  130008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:42:25.838711  130008 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 22:42:25.839208  130008 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:42:25.862909  130008 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:42:25.862997  130008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:42:25.918795  130008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-22 22:42:25.909123859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:42:25.918929  130008 docker.go:319] overlay module found
	I1222 22:42:25.920446  130008 out.go:179] * Using the docker driver based on existing profile
	I1222 22:42:25.921436  130008 start.go:309] selected driver: docker
	I1222 22:42:25.921452  130008 start.go:928] validating driver "docker" against &{Name:functional-580825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-580825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:42:25.921581  130008 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:42:25.923630  130008 out.go:203] 
	W1222 22:42:25.924737  130008 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 22:42:25.925803  130008 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580825 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.38s)
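
The dry-run exit above is minikube's client-side preflight: the requested memory is checked against a hard floor before any driver work starts, which is why --dry-run with --memory 250MB fails in well under a second with RSRC_INSUFFICIENT_REQ_MEMORY. A minimal Go sketch of that kind of check; the constant, function name, and exit code below are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
    )

    // minMemoryMB mirrors the "usable minimum of 1800MB" reported in the
    // log above; the real constant lives inside minikube and may differ.
    const minMemoryMB = 1800

    // validateMemory runs before any driver is created, so an undersized
    // request exits immediately instead of partway through a start.
    func validateMemory(requestedMB int) error {
        if requestedMB < minMemoryMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minMemoryMB)
        }
        return nil
    }

    func main() {
        if err := validateMemory(250); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
            os.Exit(23) // the run above observed exit status 23
        }
    }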

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-580825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-580825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (163.829529ms)

-- stdout --
	* [functional-580825] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1222 22:42:16.901623  128589 out.go:360] Setting OutFile to fd 1 ...
	I1222 22:42:16.901714  128589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:42:16.901718  128589 out.go:374] Setting ErrFile to fd 2...
	I1222 22:42:16.901721  128589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 22:42:16.901994  128589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 22:42:16.902398  128589 out.go:368] Setting JSON to false
	I1222 22:42:16.903419  128589 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8677,"bootTime":1766434660,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 22:42:16.903476  128589 start.go:143] virtualization: kvm guest
	I1222 22:42:16.905332  128589 out.go:179] * [functional-580825] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1222 22:42:16.906548  128589 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 22:42:16.906631  128589 notify.go:221] Checking for updates...
	I1222 22:42:16.908628  128589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 22:42:16.909760  128589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 22:42:16.910772  128589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 22:42:16.911748  128589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 22:42:16.912704  128589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 22:42:16.913990  128589 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 22:42:16.914613  128589 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 22:42:16.939813  128589 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 22:42:16.939896  128589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 22:42:16.997735  128589 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-22 22:42:16.988435097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 22:42:16.997843  128589 docker.go:319] overlay module found
	I1222 22:42:16.999618  128589 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1222 22:42:17.000623  128589 start.go:309] selected driver: docker
	I1222 22:42:17.000640  128589 start.go:928] validating driver "docker" against &{Name:functional-580825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-580825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 22:42:17.000749  128589 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 22:42:17.002522  128589 out.go:203] 
	W1222 22:42:17.003514  128589 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1222 22:42:17.004657  128589 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
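
The French run is the same preflight failure rendered under a French locale: the test flips the locale environment before invoking the binary and asserts on the translated output. A rough sketch of environment-driven message selection, assuming a hand-rolled lookup table (minikube's actual translation machinery works differently):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // translations maps a message ID to per-language text; only the one
    // string visible in the log above is included.
    var translations = map[string]map[string]string{
        "driver-existing-profile": {
            "en": "Using the docker driver based on existing profile",
            "fr": "Utilisation du pilote docker basé sur le profil existant",
        },
    }

    // localeFromEnv picks a language the way most CLIs do: LC_ALL wins,
    // then LANG, with English as the fallback.
    func localeFromEnv() string {
        for _, key := range []string{"LC_ALL", "LANG"} {
            if strings.HasPrefix(os.Getenv(key), "fr") {
                return "fr"
            }
        }
        return "en"
    }

    func main() {
        fmt.Println("* " + translations["driver-existing-profile"][localeFromEnv()])
    }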

TestFunctional/parallel/StatusCmd (0.97s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

TestFunctional/parallel/ServiceCmdConnect (39.52s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-580825 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-580825 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-55dddb6747-vhf2x" [b1fff064-7fc7-4a13-9595-3079c9c9119d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-55dddb6747-vhf2x" [b1fff064-7fc7-4a13-9595-3079c9c9119d] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 39.004399067s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31977
functional_test.go:1685: http://192.168.49.2:31977: success! body:
Request served by hello-node-connect-55dddb6747-vhf2x

HTTP/1.1 GET /

Host: 192.168.49.2:31977
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (39.52s)
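
The connect test reduces to: ask `minikube service hello-node-connect --url` for the NodePort endpoint, GET it, and verify the echo body names the serving pod. A standalone Go version of that probe; the URL is hardcoded to the one allocated in this run, where a real test would read it from the service command:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        // Endpoint printed by the service command above; NodePort
        // allocations change from run to run.
        url := "http://192.168.49.2:31977"

        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            log.Fatalf("GET %s: %v", url, err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatalf("read body: %v", err)
        }
        // The echo server reports which pod served the request, tying the
        // HTTP response back to the deployment under test.
        if !strings.Contains(string(body), "Request served by") {
            log.Fatalf("unexpected body: %q", body)
        }
        fmt.Printf("%s: success! body:\n%s", url, body)
    }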

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (60.04s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [54d377e5-fd5b-4f1e-84a6-74a30095a623] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00356765s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-580825 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-580825 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-580825 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-580825 apply -f testdata/storage-provisioner/pod.yaml
I1222 22:41:37.023552   75803 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [47fc20f1-0518-4de1-8e86-297d7835bb32] Pending
helpers_test.go:353: "sp-pod" [47fc20f1-0518-4de1-8e86-297d7835bb32] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [47fc20f1-0518-4de1-8e86-297d7835bb32] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 46.002969392s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-580825 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-580825 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-580825 delete -f testdata/storage-provisioner/pod.yaml: (1.277278257s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-580825 apply -f testdata/storage-provisioner/pod.yaml
I1222 22:42:24.535154   75803 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8f1599f1-d690-4d93-a53f-9b22cec73d95] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [8f1599f1-d690-4d93-a53f-9b22cec73d95] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004635962s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-580825 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (60.04s)
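
The sequence above is the substance of the PVC test: a file written through the claim has to survive deletion and re-creation of the pod that mounts it. The same steps driven from Go via kubectl; the context and object names are taken from this run, and the run helper is an illustrative assumption:

    package main

    import (
        "log"
        "os/exec"
    )

    // run shells out to kubectl against the functional-580825 context and
    // aborts the sketch on the first non-zero exit.
    func run(args ...string) {
        cmd := exec.Command("kubectl", append([]string{"--context", "functional-580825"}, args...)...)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
        run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // ... wait for sp-pod to be Running, as the test does ...
        run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // ... wait again; the file must still be visible through the claim ...
        run("exec", "sp-pod", "--", "ls", "/tmp/mount")
    }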

TestFunctional/parallel/SSHCmd (0.53s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.78s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh -n functional-580825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cp functional-580825:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3550571444/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh -n functional-580825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh -n functional-580825 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

TestFunctional/parallel/MySQL (29.98s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-580825 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-95tqx" [1e8fae25-a79a-4566-a458-07060b2cd95c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-95tqx" [1e8fae25-a79a-4566-a458-07060b2cd95c] Running
E1222 22:41:52.955308   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.003798686s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-580825 exec mysql-6bcdcbc558-95tqx -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-580825 exec mysql-6bcdcbc558-95tqx -- mysql -ppassword -e "show databases;": exit status 1 (141.757163ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1222 22:41:57.313550   75803 retry.go:84] will retry after 1.3s: exit status 1
functional_test.go:1817: (dbg) Run:  kubectl --context functional-580825 exec mysql-6bcdcbc558-95tqx -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-580825 exec mysql-6bcdcbc558-95tqx -- mysql -ppassword -e "show databases;": exit status 1 (152.593693ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-580825 exec mysql-6bcdcbc558-95tqx -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-580825 exec mysql-6bcdcbc558-95tqx -- mysql -ppassword -e "show databases;": exit status 1 (121.393436ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-580825 exec mysql-6bcdcbc558-95tqx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.98s)
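
The three Access denied errors above are expected noise: a fresh MySQL pod reports Running before root's credentials are fully initialized, so the test re-runs the same query with a short backoff until it succeeds. A stripped-down version of that retry loop; the attempt count and interval are invented, and minikube's own retry helper behaves differently:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        query := []string{
            "--context", "functional-580825", "exec", "mysql-6bcdcbc558-95tqx", "--",
            "mysql", "-ppassword", "-e", "show databases;",
        }
        // ERROR 1045 is transient while mysqld finishes its first-boot
        // setup, so a non-zero exit just means "try again shortly".
        for attempt := 1; attempt <= 10; attempt++ {
            out, err := exec.Command("kubectl", query...).CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            log.Printf("attempt %d: %v, will retry after %s", attempt, err, 1300*time.Millisecond)
            time.Sleep(1300 * time.Millisecond)
        }
        log.Fatal("mysql never became queryable")
    }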

TestFunctional/parallel/FileSync (0.31s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/75803/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo cat /etc/test/nested/copy/75803/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.81s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/75803.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo cat /etc/ssl/certs/75803.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/75803.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo cat /usr/share/ca-certificates/75803.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/758032.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo cat /etc/ssl/certs/758032.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/758032.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo cat /usr/share/ca-certificates/758032.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-580825 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 ssh "sudo systemctl is-active crio": exit status 1 (289.73596ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
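
That pass is an inverted assertion: `systemctl is-active` exits 0 only for an active unit (the status 3 on the ssh side is its usual code for inactive), so with the docker runtime the desired outcome for crio is a non-zero exit plus the string "inactive". The same check sketched in Go, using the binary path and profile name from this run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // For a docker-runtime cluster, crio must NOT be active; expect a
        // non-zero exit and "inactive" on stdout.
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-580825",
            "ssh", "sudo systemctl is-active crio").CombinedOutput()
        state := strings.TrimSpace(string(out))
        if err == nil || state != "inactive" {
            log.Fatalf("crio should be inactive, got state %q (err=%v)", state, err)
        }
        fmt.Println("crio correctly disabled:", state)
    }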

TestFunctional/parallel/License (0.9s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.90s)

TestFunctional/parallel/DockerEnv/bash (1.05s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-580825 docker-env) && out/minikube-linux-amd64 status -p functional-580825"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-580825 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.05s)
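
`docker-env` prints shell export lines (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH, MINIKUBE_ACTIVE_DOCKERD) and the eval injects them into the calling shell, after which a plain `docker images` talks to the daemon inside the node. The same wiring from Go instead of bash; parsing is deliberately simplified, and the DOCKER_ prefix filter also skips the comment lines the real output contains:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-580825", "docker-env").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Each relevant line looks like: export DOCKER_HOST="tcp://..."
        for _, line := range strings.Split(string(out), "\n") {
            line = strings.TrimPrefix(line, "export ")
            if k, v, ok := strings.Cut(line, "="); ok && strings.HasPrefix(k, "DOCKER_") {
                os.Setenv(k, strings.Trim(v, `"`))
            }
        }
        // With the environment applied, docker now targets the daemon
        // inside the minikube node, as the eval does in the test.
        images, err := exec.Command("docker", "images").CombinedOutput()
        if err != nil {
            log.Fatalf("docker images: %v\n%s", err, images)
        }
        os.Stdout.Write(images)
    }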

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.74s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 version -o=json --components
2025/12/22 22:42:36 [DEBUG] GET http://127.0.0.1:33813/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.74s)

TestFunctional/parallel/MountCmd/any-port (39.75s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdany-port3428665000/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766443293067683795" to /tmp/TestFunctionalparallelMountCmdany-port3428665000/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766443293067683795" to /tmp/TestFunctionalparallelMountCmdany-port3428665000/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766443293067683795" to /tmp/TestFunctionalparallelMountCmdany-port3428665000/001/test-1766443293067683795
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.916958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1222 22:41:33.364892   75803 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 22 22:41 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 22 22:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 22 22:41 test-1766443293067683795
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh cat /mount-9p/test-1766443293067683795
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-580825 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [74e98428-dad7-4b79-b115-8c93578423c8] Pending
helpers_test.go:353: "busybox-mount" [74e98428-dad7-4b79-b115-8c93578423c8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [74e98428-dad7-4b79-b115-8c93578423c8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [74e98428-dad7-4b79-b115-8c93578423c8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 37.002859505s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-580825 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdany-port3428665000/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (39.75s)
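
The findmnt failure at the top of this block is just a startup race: the mount daemon had not finished exposing /mount-9p, so the test polls findmnt and uses its exit code as the readiness signal. That wait loop, sketched; the timeout is an assumption, while the 500ms interval matches the retry logged above:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(30 * time.Second)
        for {
            // findmnt exits non-zero until /mount-9p is a real mount point
            // inside the node, so success is purely the exit code.
            out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-580825",
                "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
            if err == nil {
                fmt.Printf("mounted: %s", out)
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("mount never appeared: %v", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }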

TestFunctional/parallel/ServiceCmd/DeployApp (24.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-580825 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-580825 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-f68f7994-qhsl9" [5afc316e-5d30-404a-ab4d-d7555578f0e1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-f68f7994-qhsl9" [5afc316e-5d30-404a-ab4d-d7555578f0e1] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.003663539s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (24.13s)

TestFunctional/parallel/MountCmd/specific-port (1.86s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdspecific-port2554122884/001:/mount-9p --alsologtostderr -v=1 --port 39445]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.631221ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1222 22:42:13.121380   75803 retry.go:84] will retry after 400ms: exit status 1 (duplicate log for 39.8s)
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdspecific-port2554122884/001:/mount-9p --alsologtostderr -v=1 --port 39445] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 ssh "sudo umount -f /mount-9p": exit status 1 (279.006117ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-580825 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdspecific-port2554122884/001:/mount-9p --alsologtostderr -v=1 --port 39445] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "387.913802ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "61.397347ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "360.9292ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "59.653543ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580825 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-580825
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580825 image ls --format short --alsologtostderr:
I1222 22:42:51.592069  132733 out.go:360] Setting OutFile to fd 1 ...
I1222 22:42:51.592445  132733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.592461  132733 out.go:374] Setting ErrFile to fd 2...
I1222 22:42:51.592468  132733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.592804  132733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 22:42:51.593662  132733 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.593849  132733 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.594607  132733 cli_runner.go:164] Run: docker container inspect functional-580825 --format={{.State.Status}}
I1222 22:42:51.614414  132733 ssh_runner.go:195] Run: systemctl --version
I1222 22:42:51.614475  132733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580825
I1222 22:42:51.637581  132733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-580825/id_rsa Username:docker}
I1222 22:42:51.744359  132733 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
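
The stderr trace shows what `image ls` amounts to on the docker runtime: ssh into the node, run `docker images --no-trunc --format "{{json .}}"`, and decode one JSON object per line. A minimal decoder for that stream, with the struct trimmed to the fields the short listing prints:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // dockerImage keeps only the `docker images` JSON fields needed to
    // reproduce the repository:tag lines shown above.
    type dockerImage struct {
        Repository string `json:"Repository"`
        Tag        string `json:"Tag"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-580825",
            "ssh", `docker images --no-trunc --format "{{json .}}"`).Output()
        if err != nil {
            log.Fatal(err)
        }
        // One JSON document per line, e.g.
        // {"Repository":"registry.k8s.io/pause","Tag":"3.1",...}
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            var img dockerImage
            if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
                continue // skip any non-JSON banner lines
            }
            fmt.Printf("%s:%s\n", img.Repository, img.Tag)
        }
    }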

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580825 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test       │ functional-580825 │ d83dc71f26374 │ 30B    │
│ registry.k8s.io/kube-proxy                        │ v1.34.3           │ 36eef8e07bdd6 │ 71.9MB │
│ public.ecr.aws/docker/library/mysql               │ 8.4               │ 20d0be4ee4524 │ 785MB  │
│ registry.k8s.io/pause                             │ latest            │ 350b164e7ae1d │ 240kB  │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 04da2b0513cd7 │ 53.7MB │
│ registry.k8s.io/kube-apiserver                    │ v1.34.3           │ aa27095f56193 │ 88MB   │
│ registry.k8s.io/coredns/coredns                   │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/kube-scheduler                    │ v1.34.3           │ aec12dadf56dd │ 52.8MB │
│ registry.k8s.io/kube-controller-manager           │ v1.34.3           │ 5826b25d990d7 │ 74.9MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                             │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/etcd                              │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-580825 │ 9056ab77afb8e │ 4.94MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                             │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580825 image ls --format table --alsologtostderr:
I1222 22:42:51.837951  132976 out.go:360] Setting OutFile to fd 1 ...
I1222 22:42:51.838074  132976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.838087  132976 out.go:374] Setting ErrFile to fd 2...
I1222 22:42:51.838092  132976 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.838353  132976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 22:42:51.839189  132976 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.839348  132976 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.839973  132976 cli_runner.go:164] Run: docker container inspect functional-580825 --format={{.State.Status}}
I1222 22:42:51.860002  132976 ssh_runner.go:195] Run: systemctl --version
I1222 22:42:51.860045  132976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580825
I1222 22:42:51.879619  132976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-580825/id_rsa Username:docker}
I1222 22:42:51.978882  132976 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
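The three ImageList variants below all wrap the same `docker images --no-trunc` call over ssh (visible in each Stderr block); only the output encoding differs. A minimal way to reproduce the table above by hand, assuming the functional-580825 profile from this run is still up:

    # list the images known to the node's container runtime, as a table
    out/minikube-linux-amd64 -p functional-580825 image ls --format table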

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580825 image ls --format json --alsologtostderr:
[{"id":"d83dc71f26374dd7186dcbc75198cb28bf8c3cf49ac964aa0334ca3e9cbd5e90","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-580825"],"size":"30"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"88000000"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"71900000"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":[],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"785000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"52800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"74900000"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4940000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580825 image ls --format json --alsologtostderr:
I1222 22:42:51.588882  132731 out.go:360] Setting OutFile to fd 1 ...
I1222 22:42:51.588977  132731 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.588981  132731 out.go:374] Setting ErrFile to fd 2...
I1222 22:42:51.588986  132731 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.589285  132731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 22:42:51.589887  132731 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.589992  132731 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.590451  132731 cli_runner.go:164] Run: docker container inspect functional-580825 --format={{.State.Status}}
I1222 22:42:51.613068  132731 ssh_runner.go:195] Run: systemctl --version
I1222 22:42:51.613141  132731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580825
I1222 22:42:51.637081  132731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-580825/id_rsa Username:docker}
I1222 22:42:51.744940  132731 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
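The JSON form is a single-line array of {id, repoDigests, repoTags, size} objects, which makes it the easiest format to post-process. A sketch, assuming jq is available on the host (jq is not part of this suite):

    # print every repo tag known to the runtime, one per line
    out/minikube-linux-amd64 -p functional-580825 image ls --format json | jq -r '.[].repoTags[]'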

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-580825 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d83dc71f26374dd7186dcbc75198cb28bf8c3cf49ac964aa0334ca3e9cbd5e90
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-580825
size: "30"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "71900000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "88000000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "52800000"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests: []
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "785000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "74900000"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580825 image ls --format yaml --alsologtostderr:
I1222 22:42:51.603037  132730 out.go:360] Setting OutFile to fd 1 ...
I1222 22:42:51.603378  132730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.603392  132730 out.go:374] Setting ErrFile to fd 2...
I1222 22:42:51.603399  132730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.603753  132730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 22:42:51.604673  132730 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.604837  132730 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.605428  132730 cli_runner.go:164] Run: docker container inspect functional-580825 --format={{.State.Status}}
I1222 22:42:51.628058  132730 ssh_runner.go:195] Run: systemctl --version
I1222 22:42:51.628131  132730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580825
I1222 22:42:51.652904  132730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-580825/id_rsa Username:docker}
I1222 22:42:51.756537  132730 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 ssh pgrep buildkitd: exit status 1 (297.343841ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr: (4.770356484s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-580825 image build -t localhost/my-image:functional-580825 testdata/build --alsologtostderr:
I1222 22:42:51.887673  132988 out.go:360] Setting OutFile to fd 1 ...
I1222 22:42:51.887800  132988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.887812  132988 out.go:374] Setting ErrFile to fd 2...
I1222 22:42:51.887820  132988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 22:42:51.888038  132988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 22:42:51.888642  132988 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.889343  132988 config.go:182] Loaded profile config "functional-580825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1222 22:42:51.889899  132988 cli_runner.go:164] Run: docker container inspect functional-580825 --format={{.State.Status}}
I1222 22:42:51.908436  132988 ssh_runner.go:195] Run: systemctl --version
I1222 22:42:51.908489  132988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-580825
I1222 22:42:51.925243  132988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-580825/id_rsa Username:docker}
I1222 22:42:52.025134  132988 build_images.go:162] Building image from path: /tmp/build.3340301785.tar
I1222 22:42:52.025207  132988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1222 22:42:52.032941  132988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3340301785.tar
I1222 22:42:52.036524  132988 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3340301785.tar: stat -c "%s %y" /var/lib/minikube/build/build.3340301785.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3340301785.tar': No such file or directory
I1222 22:42:52.036552  132988 ssh_runner.go:362] scp /tmp/build.3340301785.tar --> /var/lib/minikube/build/build.3340301785.tar (3072 bytes)
I1222 22:42:52.053570  132988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3340301785
I1222 22:42:52.060896  132988 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3340301785 -xf /var/lib/minikube/build/build.3340301785.tar
I1222 22:42:52.068240  132988 docker.go:364] Building image: /var/lib/minikube/build/build.3340301785
I1222 22:42:52.068302  132988 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-580825 /var/lib/minikube/build/build.3340301785
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 1.1s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ae5ef08fb7dbfe5aef8da1ac34929378a41c71b7e11ce8c2733936ce6990f356 done
#8 naming to localhost/my-image:functional-580825 done
#8 DONE 0.0s
I1222 22:42:56.570056  132988 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-580825 /var/lib/minikube/build/build.3340301785: (4.501718813s)
I1222 22:42:56.570147  132988 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3340301785
I1222 22:42:56.578444  132988 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3340301785.tar
I1222 22:42:56.585876  132988 build_images.go:218] Built localhost/my-image:functional-580825 from /tmp/build.3340301785.tar
I1222 22:42:56.585911  132988 build_images.go:134] succeeded building to: functional-580825
I1222 22:42:56.585917  132988 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.29s)
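The build path above is worth spelling out: pgrep finds no buildkitd (docker runtime), so minikube tars the testdata/build context, copies it to /var/lib/minikube/build inside the node, and runs docker build there. A minimal sketch of the same flow, assuming an arbitrary local context directory ./ctx:

    # build ./ctx inside the cluster node and tag the result in the node's runtime
    out/minikube-linux-amd64 -p functional-580825 image build -t localhost/my-image:functional-580825 ./ctx
    # confirm the image landed
    out/minikube-linux-amd64 -p functional-580825 image ls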

TestFunctional/parallel/ImageCommands/Setup (30.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (30.83388012s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
--- PASS: TestFunctional/parallel/ImageCommands/Setup (30.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358982040/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358982040/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358982040/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T" /mount1: exit status 1 (348.109491ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-580825 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358982040/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358982040/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-580825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup358982040/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)
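The cleanup check follows a simple pattern: start several mount daemons over the same host directory, probe each target with findmnt over ssh (the first probe may race daemon startup, hence the tolerated exit 1), then kill them all at once. A sketch, assuming a host directory /tmp/data:

    out/minikube-linux-amd64 mount -p functional-580825 /tmp/data:/mount1 &
    # verify the target is mounted inside the node
    out/minikube-linux-amd64 -p functional-580825 ssh "findmnt -T /mount1"
    # tear down every mount process for the profile in one shot
    out/minikube-linux-amd64 mount -p functional-580825 --kill=true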

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-580825 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-580825 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-580825 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 129033: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-580825 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-580825 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-580825 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [510bd629-998e-463b-99f6-260770a14721] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [510bd629-998e-463b-99f6-260770a14721] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 7.002981913s
I1222 22:42:25.592944   75803 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.19s)
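Setup only applies a LoadBalancer service plus pod and polls until the pod is Running (about 7s here). A rough hand-run equivalent, substituting kubectl's wait subcommand for the suite's own poller:

    kubectl --context functional-580825 apply -f testdata/testsvc.yaml
    # block until the nginx-svc pod reports Ready, within the test's 4m budget
    kubectl --context functional-580825 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m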

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-580825 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.193.124 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-580825 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/List (0.93s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.93s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 service list -o json
functional_test.go:1504: (dbg) Done: out/minikube-linux-amd64 -p functional-580825 service list -o json: (1.733140282s)
functional_test.go:1509: Took "1.733250813s" to run "out/minikube-linux-amd64 -p functional-580825 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:32677
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:32677
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.66s)
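HTTPS, Format and URL are all views of the same NodePort endpoint (192.168.49.2:32677 in this run). A sketch of querying it by hand, with curl as an assumed extra:

    # print the plain URL, then fetch it
    URL="$(out/minikube-linux-amd64 -p functional-580825 service hello-node --url)"
    curl -s "$URL"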

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest: (1.961321394s)
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.76s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-580825 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
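ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon together form a round-trip: node runtime to tarball and back, and node runtime into the host's docker daemon. A condensed sketch, assuming /tmp/echo-server.tar as scratch space:

    IMG=ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
    # export from the node runtime to a tarball, then re-import it
    out/minikube-linux-amd64 -p functional-580825 image save "$IMG" /tmp/echo-server.tar
    out/minikube-linux-amd64 -p functional-580825 image load /tmp/echo-server.tar
    # or copy it straight into the host's docker daemon
    out/minikube-linux-amd64 -p functional-580825 image save --daemon "$IMG"
    docker image inspect "$IMG"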

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-580825
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-580825
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-580825
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22301-72233/.minikube/files/etc/test/nested/copy/75803/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.28s)
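cache add pulls each image to the host-side cache and loads it into the node, so the image stays available even if the node later loses registry access. The same flow by hand, using the commands this group of tests runs:

    out/minikube-linux-amd64 -p functional-384766 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 cache list
    # the cached image should now be visible inside the node
    out/minikube-linux-amd64 -p functional-384766 ssh sudo crictl images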

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC3346681587/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cache add minikube-local-cache-test:functional-384766
functional_test.go:1109: (dbg) Done: out/minikube-linux-amd64 -p functional-384766 cache add minikube-local-cache-test:functional-384766: (1.396175353s)
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cache delete minikube-local-cache-test:functional-384766
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-384766
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (1.67s)
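add_local is the same flow seeded from the host's docker daemon instead of a registry: build a throwaway image, cache it, then clean up both sides. A sketch, assuming a Dockerfile in ./ctx:

    docker build -t minikube-local-cache-test:functional-384766 ./ctx
    out/minikube-linux-amd64 -p functional-384766 cache add minikube-local-cache-test:functional-384766
    # drop it from the cache and from the host daemon
    out/minikube-linux-amd64 -p functional-384766 cache delete minikube-local-cache-test:functional-384766
    docker rmi minikube-local-cache-test:functional-384766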

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.572192ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.34s)
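cache_reload is the recovery path: once the image is deleted inside the node, crictl inspecti fails with "no such image", and cache reload re-pushes everything held in the cache. Step by step, exactly as run above:

    # delete the image from the node's runtime and confirm it is gone (exit 1)
    out/minikube-linux-amd64 -p functional-384766 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # restore every cached image, then the same check succeeds
    out/minikube-linux-amd64 -p functional-384766 cache reload
    out/minikube-linux-amd64 -p functional-384766 ssh sudo crictl inspecti registry.k8s.io/pause:latest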

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (0.78s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi3688790590/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (0.76s)
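Both logs variants collect the same cluster logs; --file only redirects them to a path instead of stdout:

    out/minikube-linux-amd64 -p functional-384766 logs
    out/minikube-linux-amd64 -p functional-384766 logs --file /tmp/logs.txt   # /tmp/logs.txt is an arbitrary choice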

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 config get cpus: exit status 14 (92.645041ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 config get cpus: exit status 14 (73.568294ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.47s)
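The sequence above pins down the config get contract: a set key prints its value, an unset key exits with status 14. By hand:

    out/minikube-linux-amd64 -p functional-384766 config set cpus 2
    out/minikube-linux-amd64 -p functional-384766 config get cpus    # prints 2
    out/minikube-linux-amd64 -p functional-384766 config unset cpus
    out/minikube-linux-amd64 -p functional-384766 config get cpus    # exit status 14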

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 23 (159.354983ms)
-- stdout --
	* [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1222 23:10:00.311244  188979 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:10:00.311505  188979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.311516  188979 out.go:374] Setting ErrFile to fd 2...
	I1222 23:10:00.311521  188979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.311715  188979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:10:00.312153  188979 out.go:368] Setting JSON to false
	I1222 23:10:00.313132  188979 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10340,"bootTime":1766434660,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:10:00.313189  188979 start.go:143] virtualization: kvm guest
	I1222 23:10:00.315016  188979 out.go:179] * [functional-384766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1222 23:10:00.316277  188979 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:10:00.316272  188979 notify.go:221] Checking for updates...
	I1222 23:10:00.318578  188979 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:10:00.319786  188979 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:10:00.324125  188979 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:10:00.325387  188979 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:10:00.326687  188979 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:10:00.328373  188979 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:10:00.328992  188979 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:10:00.350960  188979 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:10:00.351043  188979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.405817  188979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.396030114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.405928  188979 docker.go:319] overlay module found
	I1222 23:10:00.407657  188979 out.go:179] * Using the docker driver based on existing profile
	I1222 23:10:00.408698  188979 start.go:309] selected driver: docker
	I1222 23:10:00.408712  188979 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.408788  188979 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:10:00.410565  188979 out.go:203] 
	W1222 23:10:00.411469  188979 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1222 23:10:00.412502  188979 out.go:203] 
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384766 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.37s)
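Note: the DryRun pass above exercises minikube's pre-flight memory validation: any --memory request below the usable minimum of 1800MB aborts with RSRC_INSUFFICIENT_REQ_MEMORY before any driver work starts. A minimal reproduction against the same binary and profile (exit code 23 matches the non-zero exit recorded for the localized run below):

    $ out/minikube-linux-amd64 start -p functional-384766 --dry-run --memory 250MB --driver=docker
    X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
    $ echo $?
    23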
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-384766 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-rc.1: exit status 23 (168.82077ms)

-- stdout --
	* [functional-384766] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1222 23:10:00.147807  188876 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:10:00.147898  188876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.147905  188876 out.go:374] Setting ErrFile to fd 2...
	I1222 23:10:00.147910  188876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:10:00.148184  188876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:10:00.148643  188876 out.go:368] Setting JSON to false
	I1222 23:10:00.149608  188876 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10340,"bootTime":1766434660,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1222 23:10:00.149679  188876 start.go:143] virtualization: kvm guest
	I1222 23:10:00.151808  188876 out.go:179] * [functional-384766] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1222 23:10:00.153065  188876 notify.go:221] Checking for updates...
	I1222 23:10:00.153082  188876 out.go:179]   - MINIKUBE_LOCATION=22301
	I1222 23:10:00.154186  188876 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1222 23:10:00.155415  188876 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	I1222 23:10:00.156561  188876 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	I1222 23:10:00.157507  188876 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1222 23:10:00.158654  188876 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1222 23:10:00.160731  188876 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1222 23:10:00.161556  188876 driver.go:422] Setting default libvirt URI to qemu:///system
	I1222 23:10:00.188078  188876 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1222 23:10:00.188192  188876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:10:00.245082  188876 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-22 23:10:00.235243747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be remove
d by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:10:00.245252  188876 docker.go:319] overlay module found
	I1222 23:10:00.247068  188876 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1222 23:10:00.248278  188876 start.go:309] selected driver: docker
	I1222 23:10:00.248295  188876 start.go:928] validating driver "docker" against &{Name:functional-384766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766394456-22288@sha256:35aded7a4a0ae59b3c3af27bf7edc655e2fc3c5eaa3d1028779c0f2939f0c484 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-384766 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1222 23:10:00.248402  188876 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1222 23:10:00.250304  188876 out.go:203] 
	W1222 23:10:00.251438  188876 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1222 23:10:00.252587  188876 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.17s)
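Note: InternationalLanguage re-runs the same under-provisioned --dry-run with a French locale and asserts on the localized output. The two French lines above translate to "Using the docker driver based on existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB", matching the English run earlier. A sketch of reproducing this by hand, assuming minikube reads its display language from the standard LC_ALL/LANG environment variables:

    $ LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-384766 --dry-run --memory 250MB --driver=docker
    * Utilisation du pilote docker basé sur le profil existant
    X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo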
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.20s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.73s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh -n functional-384766 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cp functional-384766:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3180693308/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh -n functional-384766 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh -n functional-384766 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (2.07s)
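Note: the three cp invocations above cover both copy directions plus implicit parent-directory creation on the node side. Condensed (the /tmp/out.txt destination is an arbitrary placeholder):

    $ out/minikube-linux-amd64 -p functional-384766 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
    $ out/minikube-linux-amd64 -p functional-384766 cp functional-384766:/home/docker/cp-test.txt /tmp/out.txt  # node -> host
    $ out/minikube-linux-amd64 -p functional-384766 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # host -> node, parents created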
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/75803/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /etc/test/nested/copy/75803/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.33s)
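Note: the synced path follows minikube's file-sync convention: anything placed under $MINIKUBE_HOME/files/ on the host is copied into the node at the same path when the cluster starts. A sketch under that assumption, using the path and content this test checks:

    $ mkdir -p $MINIKUBE_HOME/files/etc/test/nested/copy/75803
    $ echo "Test file for checking file sync process" > $MINIKUBE_HOME/files/etc/test/nested/copy/75803/hosts
    $ out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /etc/test/nested/copy/75803/hosts"   # after the next start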
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (2.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/75803.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /etc/ssl/certs/75803.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/75803.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /usr/share/ca-certificates/75803.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/758032.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /etc/ssl/certs/758032.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/758032.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /usr/share/ca-certificates/758032.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (2.07s)
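Note: each synced certificate is checked twice, once by its .pem path and once by an OpenSSL subject-hash name (51391683.0, 3ec20f2e.0), the form /etc/ssl/certs uses to index CA certificates. If the pairing above holds (each hash file matching the cert it is grouped with, which this log does not itself prove), the hash can be derived inside the node:

    $ out/minikube-linux-amd64 -p functional-384766 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/75803.pem"
    51391683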
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh "sudo systemctl is-active crio": exit status 1 (287.236317ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.29s)
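Note: the non-zero exit here is the assertion succeeding, not failing: systemctl is-active exits 0 only for active units (status 3 means inactive, as the ssh wrapper reports above), and on a Docker-runtime cluster the test requires exactly that for cri-o. Manual equivalent:

    $ out/minikube-linux-amd64 -p functional-384766 ssh "sudo systemctl is-active crio"
    inactive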
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.59s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)
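Note: all three UpdateContextCmd variants run the same command; update-context rewrites the profile's kubeconfig entry so the server address matches the running cluster. One way to observe the effect (kubectl assumed on PATH):

    $ out/minikube-linux-amd64 -p functional-384766 update-context
    $ kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-384766")].cluster.server}'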
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.46s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "326.714977ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "62.255627ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.39s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "372.12559ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "74.54114ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.45s)
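Note: the timings above explain the --light flag: the plain listings (~327ms and ~372ms) validate each cluster's status, while the light variants (~62ms and ~75ms) skip that validation and read only the stored profile configs. Side by side:

    $ out/minikube-linux-amd64 profile list -o json           # validates cluster status
    $ out/minikube-linux-amd64 profile list -o json --light   # config only, roughly 5x faster here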
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun683538616/001:/mount-9p --alsologtostderr -v=1 --port 39301]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.442235ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun683538616/001:/mount-9p --alsologtostderr -v=1 --port 39301] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh "sudo umount -f /mount-9p": exit status 1 (282.20458ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-384766 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun683538616/001:/mount-9p --alsologtostderr -v=1 --port 39301] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.88s)
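Note: the first failed findmnt is expected; the mount daemon had only just been launched, so the 9p export was not up yet, and the test simply retries. The same wait by hand would look like (/tmp/src is an arbitrary host directory):

    $ out/minikube-linux-amd64 mount -p functional-384766 /tmp/src:/mount-9p --port 39301 &
    $ until out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done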
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T" /mount1: exit status 1 (363.492401ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-384766 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-384766 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1192915267/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.81s)
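Note: this test launches three concurrent mounts and then relies on a single "mount --kill=true" to tear down every mount helper for the profile, which is why the three stop steps afterwards find no surviving parent process. Sketch (/tmp/src arbitrary):

    $ out/minikube-linux-amd64 mount -p functional-384766 /tmp/src:/mount1 &    # likewise /mount2, /mount3
    $ out/minikube-linux-amd64 mount -p functional-384766 --kill=true           # kills all mount helpers for the profile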
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.52s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384766 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-384766
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384766 image ls --format short --alsologtostderr:
I1222 23:10:10.305493  192924 out.go:360] Setting OutFile to fd 1 ...
I1222 23:10:10.305648  192924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.305659  192924 out.go:374] Setting ErrFile to fd 2...
I1222 23:10:10.305663  192924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.305945  192924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:10:10.306613  192924 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.306756  192924 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.307205  192924 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:10:10.328279  192924 ssh_runner.go:195] Run: systemctl --version
I1222 23:10:10.328324  192924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:10:10.346209  192924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
I1222 23:10:10.446854  192924 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.24s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384766 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager           │ v1.35.0-rc.1      │ 5032a56602e1b │ 75.8MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0-rc.1      │ 73f80cdc073da │ 51.7MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0-rc.1      │ 58865405a13bc │ 89.8MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0-rc.1      │ af0321f3a4f38 │ 70.7MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-384766 │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/minikube-local-cache-test       │ functional-384766 │ d83dc71f26374 │ 30B    │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/pause                             │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                             │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                             │ latest            │ 350b164e7ae1d │ 240kB  │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384766 image ls --format table --alsologtostderr:
I1222 23:10:10.787498  193191 out.go:360] Setting OutFile to fd 1 ...
I1222 23:10:10.787640  193191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.787654  193191 out.go:374] Setting ErrFile to fd 2...
I1222 23:10:10.787661  193191 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.787846  193191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:10:10.788414  193191 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.788514  193191 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.788981  193191 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:10:10.807663  193191 ssh_runner.go:195] Run: systemctl --version
I1222 23:10:10.807714  193191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:10:10.825268  193191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
I1222 23:10:10.926023  193191 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384766 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d83dc71f26374dd7186dcbc75198cb28bf8c3cf49ac964aa0334ca3e9cbd5e90","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-384766"],"size":"30"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"51700000"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"75800000"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"70700000"},{"id":"0a108f7189562e99793b
decab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"89800000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee
4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384766 image ls --format json --alsologtostderr:
I1222 23:10:10.544639  193036 out.go:360] Setting OutFile to fd 1 ...
I1222 23:10:10.544909  193036 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.544920  193036 out.go:374] Setting ErrFile to fd 2...
I1222 23:10:10.544927  193036 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.545126  193036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:10:10.545695  193036 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.545813  193036 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.546273  193036 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:10:10.564619  193036 ssh_runner.go:195] Run: systemctl --version
I1222 23:10:10.564692  193036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:10:10.583615  193036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
I1222 23:10:10.687182  193036 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.25s)
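Note: the JSON format is the machine-readable variant of the table output; each entry carries id, repoDigests, repoTags, and size. For example, listing just the tags (jq assumed available on the host):

    $ out/minikube-linux-amd64 -p functional-384766 image ls --format json | jq -r '.[].repoTags[]'   # one tag per line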
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-384766 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "75800000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d83dc71f26374dd7186dcbc75198cb28bf8c3cf49ac964aa0334ca3e9cbd5e90
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-384766
size: "30"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "89800000"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "51700000"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "70700000"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384766 image ls --format yaml --alsologtostderr:
I1222 23:10:10.308115  192923 out.go:360] Setting OutFile to fd 1 ...
I1222 23:10:10.308221  192923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.308233  192923 out.go:374] Setting ErrFile to fd 2...
I1222 23:10:10.308237  192923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.308440  192923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:10:10.309052  192923 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.309165  192923 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.309612  192923 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:10:10.328224  192923 ssh_runner.go:195] Run: systemctl --version
I1222 23:10:10.328278  192923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:10:10.347209  192923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
I1222 23:10:10.446869  192923 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (5.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-384766 ssh pgrep buildkitd: exit status 1 (273.589154ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image build -t localhost/my-image:functional-384766 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-384766 image build -t localhost/my-image:functional-384766 testdata/build --alsologtostderr: (4.797428734s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-384766 image build -t localhost/my-image:functional-384766 testdata/build --alsologtostderr:
I1222 23:10:10.817823  193202 out.go:360] Setting OutFile to fd 1 ...
I1222 23:10:10.818094  193202 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.818104  193202 out.go:374] Setting ErrFile to fd 2...
I1222 23:10:10.818109  193202 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 23:10:10.818360  193202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
I1222 23:10:10.819053  193202 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.819662  193202 config.go:182] Loaded profile config "functional-384766": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1222 23:10:10.820154  193202 cli_runner.go:164] Run: docker container inspect functional-384766 --format={{.State.Status}}
I1222 23:10:10.837663  193202 ssh_runner.go:195] Run: systemctl --version
I1222 23:10:10.837711  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384766
I1222 23:10:10.855104  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/functional-384766/id_rsa Username:docker}
I1222 23:10:10.954154  193202 build_images.go:162] Building image from path: /tmp/build.234050813.tar
I1222 23:10:10.954214  193202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1222 23:10:10.962063  193202 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.234050813.tar
I1222 23:10:10.965587  193202 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.234050813.tar: stat -c "%s %y" /var/lib/minikube/build/build.234050813.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.234050813.tar': No such file or directory
I1222 23:10:10.965633  193202 ssh_runner.go:362] scp /tmp/build.234050813.tar --> /var/lib/minikube/build/build.234050813.tar (3072 bytes)
I1222 23:10:10.983334  193202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.234050813
I1222 23:10:10.990775  193202 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.234050813 -xf /var/lib/minikube/build/build.234050813.tar
I1222 23:10:10.998323  193202 docker.go:364] Building image: /var/lib/minikube/build/build.234050813
I1222 23:10:10.998385  193202 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-384766 /var/lib/minikube/build/build.234050813
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.1s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e884e50bc09158a4fa23f7d6afb37c55492a118ccd725d776173c5f62db17795 done
#8 naming to localhost/my-image:functional-384766 done
#8 DONE 0.0s
I1222 23:10:15.519238  193202 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-384766 /var/lib/minikube/build/build.234050813: (4.520818488s)
I1222 23:10:15.519303  193202 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.234050813
I1222 23:10:15.527452  193202 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.234050813.tar
I1222 23:10:15.534760  193202 build_images.go:218] Built localhost/my-image:functional-384766 from /tmp/build.234050813.tar
I1222 23:10:15.534797  193202 build_images.go:134] succeeded building to: functional-384766
I1222 23:10:15.534804  193202 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (5.31s)
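
The ImageBuild log above walks through minikube's staging pipeline: the context tarball is copied to the node (scp ... 3072 bytes), extracted under /var/lib/minikube/build, built with `docker build`, and the staging directory and tarball are removed. Below is a minimal Go sketch of those steps, executing locally as a stand-in for minikube's ssh_runner; buildFromTar is a hypothetical helper for illustration, not minikube's build_images.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command locally; minikube performs the equivalent
// steps over SSH on the node via its ssh_runner.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

// buildFromTar mirrors the logged steps: create the staging directory,
// extract the uploaded context tarball into it, run `docker build`,
// then clean up the staging directory.
func buildFromTar(tarPath, tag string) error {
	dir := strings.TrimSuffix(tarPath, ".tar")
	if err := run("sudo", "mkdir", "-p", dir); err != nil {
		return err
	}
	if err := run("sudo", "tar", "-C", dir, "-xf", tarPath); err != nil {
		return err
	}
	if err := run("docker", "build", "-t", tag, dir); err != nil {
		return err
	}
	return run("sudo", "rm", "-rf", dir)
}

func main() {
	err := buildFromTar("/var/lib/minikube/build/build.234050813.tar",
		"localhost/my-image:functional-384766")
	fmt.Println(err)
}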

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (1.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (1.093964898s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (1.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-384766 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-384766 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-384766
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-384766
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-384766
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (162.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1222 23:14:53.441843   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:53.447147   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:53.457382   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:53.477662   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:53.517935   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:53.598264   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:53.758654   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:54.079228   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:54.719568   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:56.000088   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:14:58.561866   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:15:03.682313   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:15:13.922564   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:15:31.033619   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:15:34.403109   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:16:15.363877   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:16:30.659474   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m41.280957002s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (162.02s)
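
The cert_rotation errors interleaved above do not fail the test: they come from a background client-go transport-cache watcher still retrying the client.crt of profiles deleted by earlier tests. The timestamps show the retry gap doubling from about 5 ms (53.441, 53.447, 53.457, 53.477, ...) up through roughly 41 s. The loop below reproduces that schedule under a plain doubling assumption; client-go's actual policy is its own backoff manager, so treat this as a sketch of the arithmetic only.

package main

import (
	"fmt"
	"time"
)

// Reproduce the retry timestamps visible above: start at
// 23:14:53.441 and double the delay each attempt, 5ms, 10ms, 20ms...
func main() {
	t := time.Date(2025, 12, 22, 23, 14, 53, 441_000_000, time.UTC)
	delay := 5 * time.Millisecond
	for i := 0; i < 15; i++ {
		fmt.Println(t.Format("15:04:05.000"))
		t = t.Add(delay)
		delay *= 2
	}
}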

TestMultiControlPlane/serial/DeployApp (9.54s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 kubectl -- rollout status deployment/busybox: (7.175708594s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-dqxqn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-ftf9p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-nl876 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-dqxqn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-ftf9p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-nl876 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-dqxqn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-ftf9p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-nl876 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.54s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-dqxqn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-dqxqn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-ftf9p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-ftf9p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-nl876 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 kubectl -- exec busybox-7b57f96db7-nl876 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
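
The ha_test.go:207 pipeline deserves a gloss: busybox nslookup prints the answer's Address line fifth, so awk 'NR==5' selects it and cut -d' ' -f3 pulls out the IP, which the next step pings (192.168.49.1, the host's address on the cluster network). A Go sketch of the same extraction, assuming the classic busybox nslookup layout; hostIPFromNslookup is illustrative, not a minikube helper.

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics the shell pipeline at ha_test.go:207
// (nslookup ... | awk 'NR==5' | cut -d' ' -f3): take line 5 of the
// nslookup output and return its third space-separated field.
func hostIPFromNslookup(output string) (string, error) {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output: %q", output)
	}
	fields := strings.Split(lines[4], " ") // cut-style split on single spaces
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected answer line: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Classic busybox nslookup layout: two server lines, a blank
	// line, then the answer's Name/Address pair on lines 4 and 5.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	ip, err := hostIPFromNslookup(out)
	fmt.Println(ip, err) // 192.168.49.1 <nil>
}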

TestMultiControlPlane/serial/AddWorkerNode (33.93s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 node add --alsologtostderr -v 5: (33.033635693s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.93s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-070728 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (17.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp testdata/cp-test.txt ha-070728:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1880636838/001/cp-test_ha-070728.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728:/home/docker/cp-test.txt ha-070728-m02:/home/docker/cp-test_ha-070728_ha-070728-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test_ha-070728_ha-070728-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728:/home/docker/cp-test.txt ha-070728-m03:/home/docker/cp-test_ha-070728_ha-070728-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test_ha-070728_ha-070728-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728:/home/docker/cp-test.txt ha-070728-m04:/home/docker/cp-test_ha-070728_ha-070728-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test_ha-070728_ha-070728-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp testdata/cp-test.txt ha-070728-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1880636838/001/cp-test_ha-070728-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m02:/home/docker/cp-test.txt ha-070728:/home/docker/cp-test_ha-070728-m02_ha-070728.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test_ha-070728-m02_ha-070728.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m02:/home/docker/cp-test.txt ha-070728-m03:/home/docker/cp-test_ha-070728-m02_ha-070728-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test_ha-070728-m02_ha-070728-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m02:/home/docker/cp-test.txt ha-070728-m04:/home/docker/cp-test_ha-070728-m02_ha-070728-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test_ha-070728-m02_ha-070728-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp testdata/cp-test.txt ha-070728-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1880636838/001/cp-test_ha-070728-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m03:/home/docker/cp-test.txt ha-070728:/home/docker/cp-test_ha-070728-m03_ha-070728.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test_ha-070728-m03_ha-070728.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m03:/home/docker/cp-test.txt ha-070728-m02:/home/docker/cp-test_ha-070728-m03_ha-070728-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test_ha-070728-m03_ha-070728-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m03:/home/docker/cp-test.txt ha-070728-m04:/home/docker/cp-test_ha-070728-m03_ha-070728-m04.txt
E1222 23:17:37.284757   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test_ha-070728-m03_ha-070728-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp testdata/cp-test.txt ha-070728-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1880636838/001/cp-test_ha-070728-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m04:/home/docker/cp-test.txt ha-070728:/home/docker/cp-test_ha-070728-m04_ha-070728.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728 "sudo cat /home/docker/cp-test_ha-070728-m04_ha-070728.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m04:/home/docker/cp-test.txt ha-070728-m02:/home/docker/cp-test_ha-070728-m04_ha-070728-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m02 "sudo cat /home/docker/cp-test_ha-070728-m04_ha-070728-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 cp ha-070728-m04:/home/docker/cp-test.txt ha-070728-m03:/home/docker/cp-test_ha-070728-m04_ha-070728-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 ssh -n ha-070728-m03 "sudo cat /home/docker/cp-test_ha-070728-m04_ha-070728-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.94s)

TestMultiControlPlane/serial/StopSecondaryNode (11.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 node stop m02 --alsologtostderr -v 5: (10.997690243s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5: exit status 7 (712.056643ms)

-- stdout --
	ha-070728
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-070728-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-070728-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-070728-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1222 23:17:53.456094  225117 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:17:53.456192  225117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:17:53.456201  225117 out.go:374] Setting ErrFile to fd 2...
	I1222 23:17:53.456216  225117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:17:53.456448  225117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:17:53.456688  225117 out.go:368] Setting JSON to false
	I1222 23:17:53.456716  225117 mustload.go:66] Loading cluster: ha-070728
	I1222 23:17:53.456820  225117 notify.go:221] Checking for updates...
	I1222 23:17:53.457245  225117 config.go:182] Loaded profile config "ha-070728": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:17:53.457264  225117 status.go:174] checking status of ha-070728 ...
	I1222 23:17:53.458026  225117 cli_runner.go:164] Run: docker container inspect ha-070728 --format={{.State.Status}}
	I1222 23:17:53.478618  225117 status.go:371] ha-070728 host status = "Running" (err=<nil>)
	I1222 23:17:53.478643  225117 host.go:66] Checking if "ha-070728" exists ...
	I1222 23:17:53.478943  225117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-070728
	I1222 23:17:53.497145  225117 host.go:66] Checking if "ha-070728" exists ...
	I1222 23:17:53.497406  225117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:17:53.497457  225117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-070728
	I1222 23:17:53.514379  225117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/ha-070728/id_rsa Username:docker}
	I1222 23:17:53.614812  225117 ssh_runner.go:195] Run: systemctl --version
	I1222 23:17:53.620955  225117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:17:53.632509  225117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:17:53.688340  225117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:74 SystemTime:2025-12-22 23:17:53.678559427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:17:53.689006  225117 kubeconfig.go:125] found "ha-070728" server: "https://192.168.49.254:8443"
	I1222 23:17:53.689042  225117 api_server.go:166] Checking apiserver status ...
	I1222 23:17:53.689103  225117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:17:53.704171  225117 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2325/cgroup
	I1222 23:17:53.712178  225117 api_server.go:182] apiserver freezer: "10:freezer:/docker/d317eefb92b8e0bd3a1fc2b42c682b29aca0e0221c5ac30bbe71ad2932a27e6c/kubepods/burstable/pod2adb31b7ca10a02083c945ea6a60909e/f3e1b78c0cef526d817f9fd8d090deb43861473e4297f905b4525c953f55c879"
	I1222 23:17:53.712236  225117 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d317eefb92b8e0bd3a1fc2b42c682b29aca0e0221c5ac30bbe71ad2932a27e6c/kubepods/burstable/pod2adb31b7ca10a02083c945ea6a60909e/f3e1b78c0cef526d817f9fd8d090deb43861473e4297f905b4525c953f55c879/freezer.state
	I1222 23:17:53.719318  225117 api_server.go:204] freezer state: "THAWED"
	I1222 23:17:53.719341  225117 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1222 23:17:53.724736  225117 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1222 23:17:53.724763  225117 status.go:463] ha-070728 apiserver status = Running (err=<nil>)
	I1222 23:17:53.724775  225117 status.go:176] ha-070728 status: &{Name:ha-070728 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:17:53.724793  225117 status.go:174] checking status of ha-070728-m02 ...
	I1222 23:17:53.725018  225117 cli_runner.go:164] Run: docker container inspect ha-070728-m02 --format={{.State.Status}}
	I1222 23:17:53.742724  225117 status.go:371] ha-070728-m02 host status = "Stopped" (err=<nil>)
	I1222 23:17:53.742745  225117 status.go:384] host is not running, skipping remaining checks
	I1222 23:17:53.742753  225117 status.go:176] ha-070728-m02 status: &{Name:ha-070728-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:17:53.742775  225117 status.go:174] checking status of ha-070728-m03 ...
	I1222 23:17:53.743040  225117 cli_runner.go:164] Run: docker container inspect ha-070728-m03 --format={{.State.Status}}
	I1222 23:17:53.760558  225117 status.go:371] ha-070728-m03 host status = "Running" (err=<nil>)
	I1222 23:17:53.760579  225117 host.go:66] Checking if "ha-070728-m03" exists ...
	I1222 23:17:53.760888  225117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-070728-m03
	I1222 23:17:53.777956  225117 host.go:66] Checking if "ha-070728-m03" exists ...
	I1222 23:17:53.778247  225117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:17:53.778295  225117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-070728-m03
	I1222 23:17:53.795551  225117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/ha-070728-m03/id_rsa Username:docker}
	I1222 23:17:53.892896  225117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:17:53.905338  225117 kubeconfig.go:125] found "ha-070728" server: "https://192.168.49.254:8443"
	I1222 23:17:53.905372  225117 api_server.go:166] Checking apiserver status ...
	I1222 23:17:53.905413  225117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:17:53.917832  225117 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2233/cgroup
	I1222 23:17:53.925619  225117 api_server.go:182] apiserver freezer: "10:freezer:/docker/d125a9a0949af4dbff717dd2acd69654d8974f4ba1fe24a3877cb4e9db7fb69d/kubepods/burstable/pod7a62d5857cda98dc3673ac9f8a632bba/9cb44f3693034cd43cfd94049e373f5e4351b110b0372f28242d981641998e91"
	I1222 23:17:53.925674  225117 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d125a9a0949af4dbff717dd2acd69654d8974f4ba1fe24a3877cb4e9db7fb69d/kubepods/burstable/pod7a62d5857cda98dc3673ac9f8a632bba/9cb44f3693034cd43cfd94049e373f5e4351b110b0372f28242d981641998e91/freezer.state
	I1222 23:17:53.933505  225117 api_server.go:204] freezer state: "THAWED"
	I1222 23:17:53.933536  225117 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1222 23:17:53.937665  225117 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1222 23:17:53.937686  225117 status.go:463] ha-070728-m03 apiserver status = Running (err=<nil>)
	I1222 23:17:53.937694  225117 status.go:176] ha-070728-m03 status: &{Name:ha-070728-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:17:53.937708  225117 status.go:174] checking status of ha-070728-m04 ...
	I1222 23:17:53.937933  225117 cli_runner.go:164] Run: docker container inspect ha-070728-m04 --format={{.State.Status}}
	I1222 23:17:53.957431  225117 status.go:371] ha-070728-m04 host status = "Running" (err=<nil>)
	I1222 23:17:53.957454  225117 host.go:66] Checking if "ha-070728-m04" exists ...
	I1222 23:17:53.957720  225117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-070728-m04
	I1222 23:17:53.975047  225117 host.go:66] Checking if "ha-070728-m04" exists ...
	I1222 23:17:53.975282  225117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:17:53.975319  225117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-070728-m04
	I1222 23:17:53.993132  225117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/ha-070728-m04/id_rsa Username:docker}
	I1222 23:17:54.092009  225117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:17:54.103922  225117 status.go:176] ha-070728-m04 status: &{Name:ha-070728-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.71s)
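
The stderr above shows the health-check chain `minikube status` runs per control plane: pgrep the kube-apiserver PID, resolve its cgroup-v1 freezer path from /proc/<pid>/cgroup, confirm freezer.state reads THAWED, then GET /healthz on the load-balancer endpoint. A sketch of the cgroup lookup step follows; freezerStatePath is a hypothetical helper, and it assumes cgroup v1, which is what this run uses (freezer hierarchy 10 in the log).

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerStatePath maps a PID to its cgroup-v1 freezer.state file,
// the lookup behind the `egrep ^[0-9]+:freezer: /proc/<pid>/cgroup`
// step above.
func freezerStatePath(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	// Each line reads "hierarchy-ID:controller-list:cgroup-path".
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return "/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state", nil
		}
	}
	return "", fmt.Errorf("no freezer controller for pid %d", pid)
}

func main() {
	// 2325 is the apiserver PID that pgrep returned in the log above.
	path, err := freezerStatePath(2325)
	fmt.Println(path, err)
}

If the resulting file reads THAWED, the status check proceeds to the /healthz probe seen at api_server.go:253.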

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.76s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 node start m02 --alsologtostderr -v 5: (39.724710942s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.76s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.11s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 stop --alsologtostderr -v 5: (34.585538977s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 start --wait true --alsologtostderr -v 5
E1222 23:19:33.714849   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:19:53.441759   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:20:21.125457   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:20:31.032833   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 start --wait true --alsologtostderr -v 5: (1m52.395590795s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.11s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.47s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 node delete m03 --alsologtostderr -v 5: (8.665089621s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.47s)
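
The go-template passed to kubectl at ha_test.go:521 walks every node's conditions and prints the status of its Ready condition, one per line. kubectl go-templates follow Go's text/template syntax, so the same template can be exercised locally; the node list below is stand-in data for illustration, not output from this run, and the outer quoting from the kubectl invocation is dropped.

package main

import (
	"os"
	"text/template"
)

func main() {
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// Prints one " True" line per Ready node, which the test compares
	// against the expected node count after the deletion.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes)
}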

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (33.65s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 stop --alsologtostderr -v 5
E1222 23:21:30.663004   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 stop --alsologtostderr -v 5: (33.531152972s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5: exit status 7 (115.62146ms)

-- stdout --
	ha-070728
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-070728-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-070728-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1222 23:21:47.478000  255182 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:21:47.478279  255182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:21:47.478291  255182 out.go:374] Setting ErrFile to fd 2...
	I1222 23:21:47.478298  255182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:21:47.478529  255182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:21:47.478736  255182 out.go:368] Setting JSON to false
	I1222 23:21:47.478768  255182 mustload.go:66] Loading cluster: ha-070728
	I1222 23:21:47.478836  255182 notify.go:221] Checking for updates...
	I1222 23:21:47.479147  255182 config.go:182] Loaded profile config "ha-070728": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:21:47.479165  255182 status.go:174] checking status of ha-070728 ...
	I1222 23:21:47.479693  255182 cli_runner.go:164] Run: docker container inspect ha-070728 --format={{.State.Status}}
	I1222 23:21:47.498385  255182 status.go:371] ha-070728 host status = "Stopped" (err=<nil>)
	I1222 23:21:47.498405  255182 status.go:384] host is not running, skipping remaining checks
	I1222 23:21:47.498419  255182 status.go:176] ha-070728 status: &{Name:ha-070728 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:21:47.498468  255182 status.go:174] checking status of ha-070728-m02 ...
	I1222 23:21:47.498731  255182 cli_runner.go:164] Run: docker container inspect ha-070728-m02 --format={{.State.Status}}
	I1222 23:21:47.516619  255182 status.go:371] ha-070728-m02 host status = "Stopped" (err=<nil>)
	I1222 23:21:47.516640  255182 status.go:384] host is not running, skipping remaining checks
	I1222 23:21:47.516648  255182 status.go:176] ha-070728-m02 status: &{Name:ha-070728-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:21:47.516671  255182 status.go:174] checking status of ha-070728-m04 ...
	I1222 23:21:47.516915  255182 cli_runner.go:164] Run: docker container inspect ha-070728-m04 --format={{.State.Status}}
	I1222 23:21:47.533845  255182 status.go:371] ha-070728-m04 host status = "Stopped" (err=<nil>)
	I1222 23:21:47.533918  255182 status.go:384] host is not running, skipping remaining checks
	I1222 23:21:47.533929  255182 status.go:176] ha-070728-m04 status: &{Name:ha-070728-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.65s)
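
Both status invocations above exit with status 7 once any host reports Stopped, and the status.go:176 lines print a struct whose fields match the rendered output. The sketch below models that shape with the field set read off the log rather than minikube's source; the constant 7 is taken from the observed runs, and minikube's real exit-code mapping lives in its status command.

package main

import "fmt"

// Status mirrors the fields visible in the status.go:176 lines above
// (&{Name:ha-070728 Host:Stopped Kubelet:Stopped ...}); minikube's
// real struct may carry more fields.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// exitCode returns 7 as soon as any host reports Stopped, matching
// the "exit status 7" both runs above produced.
func exitCode(statuses []Status) int {
	for _, s := range statuses {
		if s.Host == "Stopped" {
			return 7
		}
	}
	return 0
}

func main() {
	fmt.Println(exitCode([]Status{
		{Name: "ha-070728", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
		{Name: "ha-070728-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true},
	}))
}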

TestMultiControlPlane/serial/RestartCluster (77.3s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m16.457806585s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.30s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (48.3s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-070728 node add --control-plane --alsologtostderr -v 5: (47.3993649s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-070728 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestImageBuild/serial/Setup (25s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-764144 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-764144 --driver=docker  --container-runtime=docker: (25.001890932s)
--- PASS: TestImageBuild/serial/Setup (25.00s)

TestImageBuild/serial/NormalBuild (1.07s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-764144
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-764144: (1.07290713s)
--- PASS: TestImageBuild/serial/NormalBuild (1.07s)

TestImageBuild/serial/BuildWithBuildArg (0.68s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-764144
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.68s)

TestImageBuild/serial/BuildWithDockerIgnore (0.5s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-764144
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.50s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.49s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-764144
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.49s)
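
For reference, the four builds above exercise the main flag combinations of minikube image build: a plain build, --build-opt=build-arg=... together with --build-opt=no-cache, a context carrying a .dockerignore, and -f for a Dockerfile in a non-default location. A minimal sketch that replays three of those invocations with os/exec (binary path, profile name, and testdata paths are copied from the log; error handling is simplified):

	// image_build_sketch.go: replays the logged `minikube image build` calls.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		builds := [][]string{
			// plain build (NormalBuild)
			{"image", "build", "-t", "aaa:latest", "./testdata/image-build/test-normal", "-p", "image-764144"},
			// build-arg plus no-cache (BuildWithBuildArg)
			{"image", "build", "-t", "aaa:latest", "--build-opt=build-arg=ENV_A=test_env_str", "--build-opt=no-cache", "./testdata/image-build/test-arg", "-p", "image-764144"},
			// explicit Dockerfile path (BuildWithSpecifiedDockerfile)
			{"image", "build", "-t", "aaa:latest", "-f", "inner/Dockerfile", "./testdata/image-build/test-f", "-p", "image-764144"},
		}
		for _, args := range builds {
			if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
				log.Fatalf("%v: %v\n%s", args, err, out)
			}
		}
	}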

TestJSONOutput/start/Command (67.6s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-273667 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1222 23:24:53.440786   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:25:31.033339   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-273667 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m7.598622479s)
--- PASS: TestJSONOutput/start/Command (67.60s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.54s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-273667 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.53s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-273667 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (11.06s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-273667 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-273667 --output=json --user=testUser: (11.05696639s)
--- PASS: TestJSONOutput/stop/Command (11.06s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-815866 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-815866 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.054455ms)

-- stdout --
	{"specversion":"1.0","id":"1412d44a-828a-4b00-b522-c9b6d8fc8d93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-815866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c54d2d6-bf18-4047-8ca5-762d76b43ceb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22301"}}
	{"specversion":"1.0","id":"bde5510b-3b62-4abe-879b-7469a5552da8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79418345-8e6c-448d-b4e8-616500ac187b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig"}}
	{"specversion":"1.0","id":"14d168d0-9fed-43d2-ac45-3607bf3d6fef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube"}}
	{"specversion":"1.0","id":"48fda768-1a17-43f8-93f9-50f319941bea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5fdc3bd2-ad3d-44ba-8a94-1f3c6ac93bf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b5ea81de-4a42-4c43-ba9e-b267a0ae4acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-815866" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-815866
--- PASS: TestErrorJSONOutput (0.23s)
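
Each stdout line above is a self-contained CloudEvents-style JSON object, so the --output=json stream can be consumed line by line; the final io.k8s.sigs.minikube.error event carries the error name (DRV_UNSUPPORTED_OS) and the exit code (56) that the process then returns. A minimal decoding sketch, assuming only the fields visible in the output above rather than the full schema:

	// json_events_sketch.go: decodes minikube --output=json events from stdin.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
		Data map[string]string `json:"data"` // message, currentstep, name, exitcode, ...
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this program
		for sc.Scan() {
			var e event
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // tolerate non-JSON lines mixed into the stream
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}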

TestKicCustomNetwork/create_custom_network (27.86s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-134632 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-134632 --network=: (25.683206049s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-134632" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-134632
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-134632: (2.158416501s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.86s)

TestKicCustomNetwork/use_default_bridge_network (27.24s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-346415 --network=bridge
E1222 23:26:30.662774   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-346415 --network=bridge: (25.174970764s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-346415" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-346415
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-346415: (2.047176396s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.24s)

TestKicExistingNetwork (24.17s)
=== RUN   TestKicExistingNetwork
I1222 23:26:49.466536   75803 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1222 23:26:49.482829   75803 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1222 23:26:49.482896   75803 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1222 23:26:49.482912   75803 cli_runner.go:164] Run: docker network inspect existing-network
W1222 23:26:49.499765   75803 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1222 23:26:49.499812   75803 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1222 23:26:49.499833   75803 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1222 23:26:49.499968   75803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 23:26:49.516509   75803 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d900dc18f14 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:30:89:aa:a7:2c} reservation:<nil>}
I1222 23:26:49.516913   75803 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019b50e0}
I1222 23:26:49.516940   75803 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1222 23:26:49.516982   75803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1222 23:26:49.564573   75803 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-621204 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-621204 --network=existing-network: (22.019622883s)
helpers_test.go:176: Cleaning up "existing-network-621204" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-621204
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-621204: (2.019501408s)
I1222 23:27:13.620780   75803 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.17s)
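
The trace above also documents the subnet-selection behavior: 192.168.49.0/24 is skipped because an existing bridge (br-6d900dc18f14) already owns it, and the next free private /24, 192.168.58.0/24, is chosen for the new network. A sketch that replays the two logged commands, pre-creating the labeled network and then starting a profile on it (all arguments are copied from the log):

	// existing_network_sketch.go: pre-create a network, then start minikube on it.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		run("docker", "network", "create", "--driver=bridge",
			"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		run("out/minikube-linux-amd64", "start", "-p", "existing-network-621204",
			"--network=existing-network")
	}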

TestKicCustomSubnet (29.84s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-293774 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-293774 --subnet=192.168.60.0/24: (27.669014268s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-293774 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-293774" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-293774
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-293774: (2.148964686s)
--- PASS: TestKicCustomSubnet (29.84s)

TestKicStaticIP (29.29s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-872112 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-872112 --static-ip=192.168.200.200: (26.9628423s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-872112 ip
helpers_test.go:176: Cleaning up "static-ip-872112" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-872112
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-872112: (2.170185415s)
--- PASS: TestKicStaticIP (29.29s)
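
The pattern above (start with --static-ip, then read the address back with `minikube ip`) is the natural way to script a fixed-address profile. A minimal sketch of that round trip; profile name and address are from the log, and the comparison step is an assumption about how one would verify it:

	// static_ip_sketch.go: verify a --static-ip profile reports the expected IP.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "192.168.200.200"
		if out, err := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "static-ip-872112", "--static-ip="+want).CombinedOutput(); err != nil {
			log.Fatalf("start: %v\n%s", err, out)
		}
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-872112", "ip").Output()
		if err != nil {
			log.Fatalf("ip: %v", err)
		}
		if got := strings.TrimSpace(string(out)); got != want {
			log.Fatalf("got %s, want %s", got, want)
		}
		fmt.Println("static IP verified:", want)
	}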

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (58.11s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-601550 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-601550 --driver=docker  --container-runtime=docker: (25.452798557s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-605243 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-605243 --driver=docker  --container-runtime=docker: (27.058521347s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-601550
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-605243
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-605243" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-605243
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-605243: (2.183224298s)
helpers_test.go:176: Cleaning up "first-601550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-601550
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-601550: (2.170562948s)
--- PASS: TestMinikubeProfile (58.11s)

TestMountStart/serial/StartWithMountFirst (9.38s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-221096 --memory=3072 --mount-string /tmp/TestMountStartserial3910742196/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-221096 --memory=3072 --mount-string /tmp/TestMountStartserial3910742196/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.379514661s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.38s)

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-221096 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (9.38s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-247562 --memory=3072 --mount-string /tmp/TestMountStartserial3910742196/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-247562 --memory=3072 --mount-string /tmp/TestMountStartserial3910742196/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.377676443s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.38s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247562 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.57s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-221096 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-221096 --alsologtostderr -v=5: (1.568208559s)
--- PASS: TestMountStart/serial/DeleteFirst (1.57s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247562 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.3s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-247562
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-247562: (1.296094915s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (10.18s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-247562
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-247562: (9.182065612s)
--- PASS: TestMountStart/serial/RestartStopped (10.18s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247562 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (83.35s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206716 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1222 23:29:53.440754   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:30:14.089282   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:30:31.033114   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206716 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.840415261s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.35s)

TestMultiNode/serial/DeployApp2Nodes (8.23s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-206716 -- rollout status deployment/busybox: (6.370181807s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-n8942 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-q6rpw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-n8942 -- nslookup kubernetes.default
E1222 23:31:16.485750   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-q6rpw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-n8942 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-q6rpw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.23s)
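
The lookups above run the same three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from a busybox pod on each node, which is what actually verifies that cluster DNS resolves across nodes. A compact sketch of that loop; in practice the pod names would be read from `kubectl get pods` as the test does:

	// dns_check_sketch.go: cross-node DNS lookups via kubectl exec.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-7b57f96db7-n8942", "busybox-7b57f96db7-q6rpw"} // one per node
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, name := range names {
				out, err := exec.Command("kubectl", "--context", "multinode-206716",
					"exec", pod, "--", "nslookup", name).CombinedOutput()
				if err != nil {
					log.Fatalf("%s: nslookup %s: %v\n%s", pod, name, err, out)
				}
			}
		}
	}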

TestMultiNode/serial/PingHostFrom2Pods (0.88s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-n8942 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-n8942 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-q6rpw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-206716 -- exec busybox-7b57f96db7-q6rpw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

TestMultiNode/serial/AddNode (33.59s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-206716 -v=5 --alsologtostderr
E1222 23:31:30.659376   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-206716 -v=5 --alsologtostderr: (32.934903983s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (33.59s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-206716 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (10.08s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp testdata/cp-test.txt multinode-206716:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3370474177/001/cp-test_multinode-206716.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716:/home/docker/cp-test.txt multinode-206716-m02:/home/docker/cp-test_multinode-206716_multinode-206716-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m02 "sudo cat /home/docker/cp-test_multinode-206716_multinode-206716-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716:/home/docker/cp-test.txt multinode-206716-m03:/home/docker/cp-test_multinode-206716_multinode-206716-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m03 "sudo cat /home/docker/cp-test_multinode-206716_multinode-206716-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp testdata/cp-test.txt multinode-206716-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3370474177/001/cp-test_multinode-206716-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716-m02:/home/docker/cp-test.txt multinode-206716:/home/docker/cp-test_multinode-206716-m02_multinode-206716.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716 "sudo cat /home/docker/cp-test_multinode-206716-m02_multinode-206716.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716-m02:/home/docker/cp-test.txt multinode-206716-m03:/home/docker/cp-test_multinode-206716-m02_multinode-206716-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m03 "sudo cat /home/docker/cp-test_multinode-206716-m02_multinode-206716-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp testdata/cp-test.txt multinode-206716-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3370474177/001/cp-test_multinode-206716-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716-m03:/home/docker/cp-test.txt multinode-206716:/home/docker/cp-test_multinode-206716-m03_multinode-206716.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716 "sudo cat /home/docker/cp-test_multinode-206716-m03_multinode-206716.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 cp multinode-206716-m03:/home/docker/cp-test.txt multinode-206716-m02:/home/docker/cp-test_multinode-206716-m03_multinode-206716-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 ssh -n multinode-206716-m02 "sudo cat /home/docker/cp-test_multinode-206716-m03_multinode-206716-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.08s)
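
The copy matrix above exercises all three directions `minikube cp` supports (host file to a node, node file back to the host, and node to node), each verified with `ssh -n <node> sudo cat`. A condensed sketch of the three forms; node names are from the log, and the host-side destination path is simplified here:

	// cp_sketch.go: the three source/destination forms of `minikube cp`.
	package main

	import (
		"log"
		"os/exec"
	)

	func cp(src, dst string) {
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-206716",
			"cp", src, dst).CombinedOutput(); err != nil {
			log.Fatalf("cp %s %s: %v\n%s", src, dst, err, out)
		}
	}

	func main() {
		cp("testdata/cp-test.txt", "multinode-206716:/home/docker/cp-test.txt")              // host -> node
		cp("multinode-206716:/home/docker/cp-test.txt", "/tmp/cp-test_multinode-206716.txt") // node -> host
		cp("multinode-206716:/home/docker/cp-test.txt",
			"multinode-206716-m02:/home/docker/cp-test.txt") // node -> node
	}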

TestMultiNode/serial/StopNode (2.28s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-206716 node stop m03: (1.288793243s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206716 status: exit status 7 (491.162345ms)

-- stdout --
	multinode-206716
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-206716-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-206716-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr: exit status 7 (494.675886ms)

-- stdout --
	multinode-206716
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-206716-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-206716-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1222 23:32:04.302333  340978 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:32:04.302780  340978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:32:04.302790  340978 out.go:374] Setting ErrFile to fd 2...
	I1222 23:32:04.302797  340978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:32:04.303003  340978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:32:04.303189  340978 out.go:368] Setting JSON to false
	I1222 23:32:04.303217  340978 mustload.go:66] Loading cluster: multinode-206716
	I1222 23:32:04.303343  340978 notify.go:221] Checking for updates...
	I1222 23:32:04.303618  340978 config.go:182] Loaded profile config "multinode-206716": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:32:04.303636  340978 status.go:174] checking status of multinode-206716 ...
	I1222 23:32:04.304095  340978 cli_runner.go:164] Run: docker container inspect multinode-206716 --format={{.State.Status}}
	I1222 23:32:04.324981  340978 status.go:371] multinode-206716 host status = "Running" (err=<nil>)
	I1222 23:32:04.325000  340978 host.go:66] Checking if "multinode-206716" exists ...
	I1222 23:32:04.325249  340978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-206716
	I1222 23:32:04.342222  340978 host.go:66] Checking if "multinode-206716" exists ...
	I1222 23:32:04.342461  340978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:32:04.342504  340978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-206716
	I1222 23:32:04.359211  340978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/multinode-206716/id_rsa Username:docker}
	I1222 23:32:04.456779  340978 ssh_runner.go:195] Run: systemctl --version
	I1222 23:32:04.462781  340978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:32:04.474111  340978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1222 23:32:04.527738  340978 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2025-12-22 23:32:04.518064401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-5 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1222 23:32:04.528515  340978 kubeconfig.go:125] found "multinode-206716" server: "https://192.168.67.2:8443"
	I1222 23:32:04.528549  340978 api_server.go:166] Checking apiserver status ...
	I1222 23:32:04.528616  340978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1222 23:32:04.541730  340978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I1222 23:32:04.549682  340978 api_server.go:182] apiserver freezer: "10:freezer:/docker/a6c40ddca3088d2e10f3b9b49ba8f8df967888464ca669b22726af45fa515a5c/kubepods/burstable/pod4fc9f3ffe5c643b20de2592557de912f/ae0ac4a1e7d6b9ce279ff55988e52182fdf8523e4f3a0d4d0c76cc05929d359b"
	I1222 23:32:04.549732  340978 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a6c40ddca3088d2e10f3b9b49ba8f8df967888464ca669b22726af45fa515a5c/kubepods/burstable/pod4fc9f3ffe5c643b20de2592557de912f/ae0ac4a1e7d6b9ce279ff55988e52182fdf8523e4f3a0d4d0c76cc05929d359b/freezer.state
	I1222 23:32:04.556695  340978 api_server.go:204] freezer state: "THAWED"
	I1222 23:32:04.556720  340978 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1222 23:32:04.560696  340978 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1222 23:32:04.560715  340978 status.go:463] multinode-206716 apiserver status = Running (err=<nil>)
	I1222 23:32:04.560724  340978 status.go:176] multinode-206716 status: &{Name:multinode-206716 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:32:04.560741  340978 status.go:174] checking status of multinode-206716-m02 ...
	I1222 23:32:04.560996  340978 cli_runner.go:164] Run: docker container inspect multinode-206716-m02 --format={{.State.Status}}
	I1222 23:32:04.578458  340978 status.go:371] multinode-206716-m02 host status = "Running" (err=<nil>)
	I1222 23:32:04.578477  340978 host.go:66] Checking if "multinode-206716-m02" exists ...
	I1222 23:32:04.578767  340978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-206716-m02
	I1222 23:32:04.595232  340978 host.go:66] Checking if "multinode-206716-m02" exists ...
	I1222 23:32:04.595523  340978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1222 23:32:04.595570  340978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-206716-m02
	I1222 23:32:04.611785  340978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22301-72233/.minikube/machines/multinode-206716-m02/id_rsa Username:docker}
	I1222 23:32:04.708503  340978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1222 23:32:04.720319  340978 status.go:176] multinode-206716-m02 status: &{Name:multinode-206716-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:32:04.720349  340978 status.go:174] checking status of multinode-206716-m03 ...
	I1222 23:32:04.720621  340978 cli_runner.go:164] Run: docker container inspect multinode-206716-m03 --format={{.State.Status}}
	I1222 23:32:04.737441  340978 status.go:371] multinode-206716-m03 host status = "Stopped" (err=<nil>)
	I1222 23:32:04.737459  340978 status.go:384] host is not running, skipping remaining checks
	I1222 23:32:04.737466  340978 status.go:176] multinode-206716-m03 status: &{Name:multinode-206716-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
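
The stderr trace above shows how `status` decides apiserver health: it inspects the container state, finds the kube-apiserver PID with pgrep, confirms the process's cgroup freezer state is THAWED, and finally GETs https://<node-ip>:8443/healthz. A minimal sketch of just the final probe; the node IP is from the log, and InsecureSkipVerify stands in for loading the cluster CA in a quick check like this:

	// healthz_sketch.go: probe the apiserver healthz endpoint directly.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // apiserver cert is not in the system pool
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // the run above returned 200: ok
	}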

TestMultiNode/serial/StartAfterStop (8.44s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-206716 node start m03 -v=5 --alsologtostderr: (7.743766479s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.44s)

TestMultiNode/serial/RestartKeepsNodes (75.05s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-206716
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-206716
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-206716: (22.895465403s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206716 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206716 --wait=true -v=5 --alsologtostderr: (52.037311344s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-206716
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.05s)

TestMultiNode/serial/DeleteNode (5.32s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-206716 node delete m03: (4.713493554s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.32s)

TestMultiNode/serial/StopMultiNode (21.93s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-206716 stop: (21.742197387s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206716 status: exit status 7 (95.332976ms)

-- stdout --
	multinode-206716
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-206716-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr: exit status 7 (96.084361ms)

-- stdout --
	multinode-206716
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-206716-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1222 23:33:55.450812  356465 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:33:55.451068  356465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:33:55.451077  356465 out.go:374] Setting ErrFile to fd 2...
	I1222 23:33:55.451081  356465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:33:55.451287  356465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:33:55.451459  356465 out.go:368] Setting JSON to false
	I1222 23:33:55.451482  356465 mustload.go:66] Loading cluster: multinode-206716
	I1222 23:33:55.451605  356465 notify.go:221] Checking for updates...
	I1222 23:33:55.451827  356465 config.go:182] Loaded profile config "multinode-206716": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:33:55.451842  356465 status.go:174] checking status of multinode-206716 ...
	I1222 23:33:55.452261  356465 cli_runner.go:164] Run: docker container inspect multinode-206716 --format={{.State.Status}}
	I1222 23:33:55.471555  356465 status.go:371] multinode-206716 host status = "Stopped" (err=<nil>)
	I1222 23:33:55.471575  356465 status.go:384] host is not running, skipping remaining checks
	I1222 23:33:55.471614  356465 status.go:176] multinode-206716 status: &{Name:multinode-206716 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1222 23:33:55.471655  356465 status.go:174] checking status of multinode-206716-m02 ...
	I1222 23:33:55.471915  356465 cli_runner.go:164] Run: docker container inspect multinode-206716-m02 --format={{.State.Status}}
	I1222 23:33:55.489019  356465 status.go:371] multinode-206716-m02 host status = "Stopped" (err=<nil>)
	I1222 23:33:55.489039  356465 status.go:384] host is not running, skipping remaining checks
	I1222 23:33:55.489046  356465 status.go:176] multinode-206716-m02 status: &{Name:multinode-206716-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.93s)
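
Note: as the output above shows, `minikube status` exits non-zero once the host is down (exit status 7 in this run), so scripts can branch on the exit code instead of parsing text. A minimal sketch using the profile from this run:

	$ out/minikube-linux-amd64 -p multinode-206716 status
	$ [ $? -eq 7 ] && echo "cluster is stopped"   # 7 is the code observed above for a stopped host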

TestMultiNode/serial/RestartMultiNode (50.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206716 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206716 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (49.675764058s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-206716 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.27s)

TestMultiNode/serial/ValidateNameConflict (28.61s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-206716
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206716-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-206716-m02 --driver=docker  --container-runtime=docker: exit status 14 (71.408298ms)

-- stdout --
	* [multinode-206716-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-206716-m02' is duplicated with machine name 'multinode-206716-m02' in profile 'multinode-206716'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-206716-m03 --driver=docker  --container-runtime=docker
E1222 23:34:53.441707   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-206716-m03 --driver=docker  --container-runtime=docker: (26.011925292s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-206716
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-206716: exit status 80 (292.213404ms)

-- stdout --
	* Adding node m03 to cluster multinode-206716 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-206716-m03 already exists in multinode-206716-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-206716-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-206716-m03: (2.174389313s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.61s)

TestScheduledStopUnix (99.53s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-249783 --memory=3072 --driver=docker  --container-runtime=docker
E1222 23:35:31.032883   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-249783 --memory=3072 --driver=docker  --container-runtime=docker: (26.380832347s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249783 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1222 23:35:44.940586  372280 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:35:44.940885  372280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:35:44.940897  372280 out.go:374] Setting ErrFile to fd 2...
	I1222 23:35:44.940901  372280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:35:44.941095  372280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:35:44.941343  372280 out.go:368] Setting JSON to false
	I1222 23:35:44.941435  372280 mustload.go:66] Loading cluster: scheduled-stop-249783
	I1222 23:35:44.941767  372280 config.go:182] Loaded profile config "scheduled-stop-249783": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:35:44.941848  372280 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/scheduled-stop-249783/config.json ...
	I1222 23:35:44.942051  372280 mustload.go:66] Loading cluster: scheduled-stop-249783
	I1222 23:35:44.942181  372280 config.go:182] Loaded profile config "scheduled-stop-249783": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-249783 -n scheduled-stop-249783
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249783 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1222 23:35:45.328876  372435 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:35:45.329189  372435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:35:45.329202  372435 out.go:374] Setting ErrFile to fd 2...
	I1222 23:35:45.329207  372435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:35:45.329422  372435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:35:45.329704  372435 out.go:368] Setting JSON to false
	I1222 23:35:45.329929  372435 daemonize_unix.go:73] killing process 372317 as it is an old scheduled stop
	I1222 23:35:45.330051  372435 mustload.go:66] Loading cluster: scheduled-stop-249783
	I1222 23:35:45.330516  372435 config.go:182] Loaded profile config "scheduled-stop-249783": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:35:45.330647  372435 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/scheduled-stop-249783/config.json ...
	I1222 23:35:45.330850  372435 mustload.go:66] Loading cluster: scheduled-stop-249783
	I1222 23:35:45.330988  372435 config.go:182] Loaded profile config "scheduled-stop-249783": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1222 23:35:45.337534   75803 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/scheduled-stop-249783/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249783 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-249783 -n scheduled-stop-249783
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-249783
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249783 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1222 23:36:11.213244  373393 out.go:360] Setting OutFile to fd 1 ...
	I1222 23:36:11.213533  373393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:36:11.213544  373393 out.go:374] Setting ErrFile to fd 2...
	I1222 23:36:11.213548  373393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1222 23:36:11.213785  373393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22301-72233/.minikube/bin
	I1222 23:36:11.214083  373393 out.go:368] Setting JSON to false
	I1222 23:36:11.214179  373393 mustload.go:66] Loading cluster: scheduled-stop-249783
	I1222 23:36:11.214518  373393 config.go:182] Loaded profile config "scheduled-stop-249783": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1222 23:36:11.214616  373393 profile.go:143] Saving config to /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/scheduled-stop-249783/config.json ...
	I1222 23:36:11.214826  373393 mustload.go:66] Loading cluster: scheduled-stop-249783
	I1222 23:36:11.214946  373393 config.go:182] Loaded profile config "scheduled-stop-249783": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3

** /stderr **
E1222 23:36:13.716036   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1222 23:36:30.663671   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-249783
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-249783: exit status 7 (78.990461ms)

-- stdout --
	scheduled-stop-249783
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-249783 -n scheduled-stop-249783
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-249783 -n scheduled-stop-249783: exit status 7 (78.949508ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-249783" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-249783
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-249783: (1.649980018s)
--- PASS: TestScheduledStopUnix (99.53s)
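
Note: the schedule/cancel cycle exercised above reduces to three user-facing commands, with the flags exactly as used in this run:

	$ minikube stop -p scheduled-stop-249783 --schedule 5m                    # arm a stop 5 minutes out
	$ minikube status -p scheduled-stop-249783 --format='{{.TimeToStop}}'     # inspect the pending stop
	$ minikube stop -p scheduled-stop-249783 --cancel-scheduled               # cancel all scheduled stops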

TestSkaffold (114.03s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1996543678 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-356784 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-356784 --memory=3072 --driver=docker  --container-runtime=docker: (26.175629769s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1996543678 run --minikube-profile skaffold-356784 --kube-context skaffold-356784 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1996543678 run --minikube-profile skaffold-356784 --kube-context skaffold-356784 --status-check=true --port-forward=false --interactive=false: (1m8.382269533s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-5c8699694-vdpsr" [d5645139-95bc-43f2-8fac-bc3bf28facd1] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004088529s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-6c768ff5b6-hmswx" [78b8a2c2-154d-44df-a5c3-893f5059dc5e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00333607s
helpers_test.go:176: Cleaning up "skaffold-356784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-356784
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-356784: (2.925032496s)
--- PASS: TestSkaffold (114.03s)

TestInsufficientStorage (12.35s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-483656 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-483656 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.008671638s)

-- stdout --
	{"specversion":"1.0","id":"4fc96649-4f77-47be-8ee6-732116337090","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-483656] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"551e8f31-90c4-4608-bb9e-eb5852b39ec1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22301"}}
	{"specversion":"1.0","id":"39454009-920f-4db9-bd5a-c9a25075ef7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"abc48bf8-c6a5-4a1d-8ea7-cb1c2ee437f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig"}}
	{"specversion":"1.0","id":"de2a5ce3-5034-49eb-bdc0-4a9c9b38f420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube"}}
	{"specversion":"1.0","id":"c4ef29ef-f6e8-45f6-a732-a019f8370814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"033149c0-1841-47e9-b8fe-8812a3785801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8007e680-0721-4cd4-a58f-e368101de614","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e4ab604d-e443-40ae-b0e9-8dc6130398a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d0aa0342-bce6-47b7-a743-5412b1538860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a147ff1d-d6c5-402a-90fa-525e38e9c125","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"44f83b93-7a05-4a08-9d7f-5b2343a6932b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-483656\" primary control-plane node in \"insufficient-storage-483656\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b0f2a17-6674-4730-abc8-a0d756e0a83d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766394456-22288 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1605b958-09aa-47f0-bd93-625e75ffe22b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d5d312a-9ae2-45ab-a03c-aafde12587ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-483656 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-483656 --output=json --layout=cluster: exit status 7 (286.776376ms)

-- stdout --
	{"Name":"insufficient-storage-483656","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-483656","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1222 23:39:02.343923  385949 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-483656" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-483656 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-483656 --output=json --layout=cluster: exit status 7 (291.3186ms)

-- stdout --
	{"Name":"insufficient-storage-483656","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-483656","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1222 23:39:02.635924  386062 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-483656" does not appear in /home/jenkins/minikube-integration/22301-72233/kubeconfig
	E1222 23:39:02.646000  386062 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/insufficient-storage-483656/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-483656" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-483656
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-483656: (1.759943137s)
--- PASS: TestInsufficientStorage (12.35s)
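
Note: with --output=json every progress line above is a CloudEvents-style JSON object, so the failure can be pulled out mechanically. A small sketch assuming jq is available ("demo" is a placeholder profile name, not one from this run):

	$ minikube start -p demo --output=json 2>&1 \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'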

TestRunningBinaryUpgrade (358.97s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2059116412 start -p running-upgrade-541329 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2059116412 start -p running-upgrade-541329 --memory=3072 --vm-driver=docker  --container-runtime=docker: (1m2.989417945s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-541329 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-541329 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m48.60270102s)
helpers_test.go:176: Cleaning up "running-upgrade-541329" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-541329
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-541329: (2.112901942s)
--- PASS: TestRunningBinaryUpgrade (358.97s)

TestMissingContainerUpgrade (86.15s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2248044290 start -p missing-upgrade-564661 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2248044290 start -p missing-upgrade-564661 --memory=3072 --driver=docker  --container-runtime=docker: (25.814487026s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-564661
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-564661: (10.457743354s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-564661
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-564661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-564661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.464340986s)
helpers_test.go:176: Cleaning up "missing-upgrade-564661" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-564661
E1222 23:43:58.592689   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-564661: (2.168519276s)
--- PASS: TestMissingContainerUpgrade (86.15s)

TestStoppedBinaryUpgrade/Setup (5.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.64s)

TestPause/serial/Start (80.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-520621 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-520621 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m20.967786989s)
--- PASS: TestPause/serial/Start (80.97s)

TestStoppedBinaryUpgrade/Upgrade (364.18s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2272707078 start -p stopped-upgrade-521826 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1222 23:39:53.441073   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2272707078 start -p stopped-upgrade-521826 --memory=3072 --vm-driver=docker  --container-runtime=docker: (1m25.134457416s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2272707078 -p stopped-upgrade-521826 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2272707078 -p stopped-upgrade-521826 stop: (10.787021258s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-521826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-521826 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m28.262045214s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (364.18s)

TestPause/serial/SecondStartNoReconfiguration (36.99s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-520621 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-520621 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.96826714s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-958290 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-958290 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (75.045955ms)

-- stdout --
	* [NoKubernetes-958290] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22301
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22301-72233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22301-72233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
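
Note: per the advice in the stderr block above, the same MK_USAGE error can come from a persisted global default rather than an explicit flag; the fix maps to the following (the `config get` inspection step is an addition here, not from this run):

	$ minikube config get kubernetes-version     # show the global default, if one is set
	$ minikube config unset kubernetes-version   # clear it, per the advice above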

TestNoKubernetes/serial/StartWithK8s (25.67s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-958290 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-958290 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.345571983s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-958290 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.67s)

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-520621 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-520621 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-520621 --output=json --layout=cluster: exit status 2 (306.712063ms)

-- stdout --
	{"Name":"pause-520621","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-520621","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-520621 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/PauseAgain (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-520621 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)

TestPause/serial/DeletePaused (2.25s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-520621 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-520621 --alsologtostderr -v=5: (2.246396228s)
--- PASS: TestPause/serial/DeletePaused (2.25s)

TestPause/serial/VerifyDeletedResources (28.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (28.394032033s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-520621
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-520621: exit status 1 (17.669764ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-520621: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (28.45s)
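
Note: the verification above checks that deletion also cleaned up on the Docker side; the same checks by hand, with the profile name from this run:

	$ docker ps -a                        # no pause-520621 container should remain
	$ docker volume inspect pause-520621  # expected to fail with "no such volume"
	$ docker network ls                   # no pause-520621 network should remain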

TestNoKubernetes/serial/StartWithStopK8s (16.06s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-958290 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1222 23:41:30.659899   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-958290 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (13.862548954s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-958290 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-958290 status -o json: exit status 2 (329.626806ms)

-- stdout --
	{"Name":"NoKubernetes-958290","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-958290
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-958290: (1.86565509s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.06s)

TestNoKubernetes/serial/Start (8.7s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-958290 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-958290 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (8.701298597s)
--- PASS: TestNoKubernetes/serial/Start (8.70s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22301-72233/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-958290 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-958290 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.384346ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
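
Note: the ssh probe above leans on systemd semantics: `systemctl is-active --quiet` exits 0 only while the unit is active, and the "status 3" in the stderr block means inactive, so any non-zero exit reads as "kubelet not running". Reproduced by hand:

	$ minikube ssh -p NoKubernetes-958290 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?   # non-zero here; the remote systemctl itself returned 3 (inactive), as logged above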

TestNoKubernetes/serial/ProfileList (31.65s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (14.550666195s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (17.103995056s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.65s)

TestPreload/Start-NoPreload-PullImage (91.38s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-409540 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-409540 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m16.05270569s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409540 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:56: (dbg) Done: out/minikube-linux-amd64 -p test-preload-409540 image pull ghcr.io/medyagh/image-mirrors/busybox:latest: (2.722009552s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-409540
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-409540: (12.607909692s)
--- PASS: TestPreload/Start-NoPreload-PullImage (91.38s)

TestNoKubernetes/serial/Stop (1.47s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-958290
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-958290: (1.470513137s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

TestNoKubernetes/serial/StartNoArgs (9.7s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-958290 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-958290 --driver=docker  --container-runtime=docker: (9.698650059s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.70s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-958290 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-958290 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.25453ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestPreload/Restart-With-Preload-Check-User-Image (47.96s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-409540 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1222 23:43:38.109779   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:38.115036   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:38.125422   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:38.146269   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:38.186723   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:38.266870   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:38.427574   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:38.748469   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:39.389565   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:40.670424   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:43.231465   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:43:48.352024   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-409540 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (47.689731241s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-409540 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (47.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-521826
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

TestStartStop/group/old-k8s-version/serial/FirstStart (80.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-687073 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-687073 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m20.665611229s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (80.67s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-687073 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [369b9c88-587a-417b-aee6-43b2d4bb05ce] Pending
E1222 23:46:54.089987   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [369b9c88-587a-417b-aee6-43b2d4bb05ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [369b9c88-587a-417b-aee6-43b2d4bb05ce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.003128449s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-687073 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-687073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-687073 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-687073 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-687073 --alsologtostderr -v=3: (10.93690613s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-687073 -n old-k8s-version-687073
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-687073 -n old-k8s-version-687073: exit status 7 (82.756116ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-687073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
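EnableAddonAfterStop deliberately tolerates the non-zero exit above: against a stopped profile, minikube status exits with status 7 while still printing the formatted host state on stdout, so the test just logs "status error ... (may be ok)" and moves on. A sketch of that tolerant status check in Go; the binary path and profile name are copied from the log, and treating exit status 7 as "stopped" is inferred from this run rather than from a documented exit-code table:

    // status.go: run `minikube status` for a stopped profile and read the
    // state from stdout even though the command exits non-zero (7 here).
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-687073", "-n", "old-k8s-version-687073")
        out, err := cmd.Output() // stdout is still captured when the command fails
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Mirrors "status error: exit status 7 (may be ok)" in the log.
            fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
        } else if err != nil {
            panic(err) // binary missing or not executable, etc.
        }
        fmt.Println("host state:", strings.TrimSpace(string(out))) // e.g. "Stopped"
    }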

TestStartStop/group/old-k8s-version/serial/SecondStart (52.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-687073 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1222 23:47:56.486854   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-687073 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (51.873005831s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-687073 -n old-k8s-version-687073
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.20s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-q2jvp" [8fb40001-71e2-4398-969f-67708d2d4541] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004135074s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-q2jvp" [8fb40001-71e2-4398-969f-67708d2d4541] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003650072s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-687073 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-687073 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
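VerifyKubernetesImages lists the images in the profile as JSON and reports anything outside the expected Kubernetes registries, which is how gcr.io/k8s-minikube/busybox:1.28.4-glibc gets flagged above. A rough sketch of such a filter; the JSON shape used here (a flat array of repo:tag strings) is an assumption for illustration, not the documented output format of image list --format=json:

    // images.go: decode `minikube image list --format=json` and report images
    // not pulled from registry.k8s.io. The flat []string shape is an assumed
    // simplification of the real output.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p",
            "old-k8s-version-687073", "image", "list", "--format=json").Output()
        if err != nil {
            panic(err)
        }
        var images []string
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            if !strings.HasPrefix(img, "registry.k8s.io/") {
                fmt.Println("Found non-minikube image:", img) // e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc
            }
        }
    }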

TestStartStop/group/old-k8s-version/serial/Pause (2.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-687073 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-687073 -n old-k8s-version-687073
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-687073 -n old-k8s-version-687073: exit status 2 (348.424901ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-687073 -n old-k8s-version-687073
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-687073 -n old-k8s-version-687073: exit status 2 (332.424603ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-687073 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-687073 -n old-k8s-version-687073
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-687073 -n old-k8s-version-687073
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)
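The Pause sequence works because --format is a Go text/template rendered over minikube's status fields: with the control plane paused, {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2, and unpause restores both. A sketch of that template rendering; the Status struct and its field values are assumptions read off the log, not minikube's actual status type:

    // format.go: render the --format templates from the log against a mock
    // status value; the Status type here is illustrative, not minikube's own.
    package main

    import (
        "os"
        "text/template"
    )

    type Status struct {
        Host      string
        APIServer string
        Kubelet   string
    }

    func main() {
        paused := Status{Host: "Running", APIServer: "Paused", Kubelet: "Stopped"}
        for _, f := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
            t := template.Must(template.New("status").Parse(f))
            if err := t.Execute(os.Stdout, paused); err != nil { // prints "Paused", then "Stopped"
                panic(err)
            }
            os.Stdout.WriteString("\n")
        }
    }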

TestStartStop/group/embed-certs/serial/FirstStart (68.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-142613 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-142613 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3: (1m8.381997588s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (68.38s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-700304 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3
E1222 23:48:38.110000   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:49:05.795166   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-700304 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3: (39.206028572s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-700304 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [75212cd2-8627-4bbc-a849-31fe74ed68b8] Pending
helpers_test.go:353: "busybox" [75212cd2-8627-4bbc-a849-31fe74ed68b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [75212cd2-8627-4bbc-a849-31fe74ed68b8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.00336623s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-700304 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-700304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-700304 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-700304 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-700304 --alsologtostderr -v=3: (11.050643251s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.05s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304: exit status 7 (85.4739ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-700304 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-700304 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-700304 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3: (54.309242667s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.65s)

TestStartStop/group/embed-certs/serial/DeployApp (12.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-142613 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [23c9021a-6be5-448d-aecf-6f9dd29ee6ac] Pending
helpers_test.go:353: "busybox" [23c9021a-6be5-448d-aecf-6f9dd29ee6ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [23c9021a-6be5-448d-aecf-6f9dd29ee6ac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.002903405s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-142613 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.25s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-142613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-142613 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/embed-certs/serial/Stop (11.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-142613 --alsologtostderr -v=3
E1222 23:49:53.440951   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-384766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-142613 --alsologtostderr -v=3: (11.022472422s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-142613 -n embed-certs-142613
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-142613 -n embed-certs-142613: exit status 7 (80.476265ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-142613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (49.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-142613 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-142613 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.3: (49.211284664s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-142613 -n embed-certs-142613
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.54s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-d9f67" [52b9adab-1f95-418a-9a35-c138ec49aa91] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003402817s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-d9f67" [52b9adab-1f95-418a-9a35-c138ec49aa91] Running
E1222 23:50:31.033553   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004019544s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-700304 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-700304 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-700304 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304: exit status 2 (326.688451ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304: exit status 2 (316.496801ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-700304 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-700304 -n default-k8s-diff-port-700304
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-56qr8" [585a6ed0-8bb3-46e7-bb02-115c6025b747] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003422676s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-56qr8" [585a6ed0-8bb3-46e7-bb02-115c6025b747] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002944736s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-142613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-142613 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-142613 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-142613 -n embed-certs-142613
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-142613 -n embed-certs-142613: exit status 2 (323.605327ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-142613 -n embed-certs-142613
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-142613 -n embed-certs-142613: exit status 2 (309.475188ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-142613 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-142613 -n embed-certs-142613
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-142613 -n embed-certs-142613
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)

TestNetworkPlugins/group/auto/Start (64.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1222 23:51:30.660363   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:53.431117   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:53.436502   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:53.446878   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:53.467131   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:53.507444   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:53.587797   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:53.748235   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:54.068842   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:54.709512   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:55.990005   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:51:58.551173   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:52:03.671948   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m4.963371493s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.96s)
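The E1222 23:51:53 to 23:52:03 burst above is client-go's cert_rotation watcher retrying after the old-k8s-version-687073 profile (and its client.crt) was deleted; the gaps between consecutive attempts roughly double each time (about 5ms, 10ms, 20ms, up to 5.12s), i.e. exponential backoff. A toy reproduction of that schedule; the 5ms base is read off the timestamps, not taken from client-go's configuration:

    // backoff.go: reproduce the doubling retry spacing visible in the
    // cert_rotation burst above (11 intervals, ~5ms growing to ~5.12s).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 5 * time.Millisecond
        for attempt := 1; attempt <= 11; attempt++ {
            fmt.Printf("attempt %2d: next retry in %v\n", attempt, delay)
            delay *= 2 // matches the spacing of the 23:51:53-23:52:03 log lines
        }
    }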

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-003676 "pgrep -a kubelet"
I1222 23:52:09.653373   75803 config.go:182] Loaded profile config "auto-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gn9t4" [5f97e107-0212-4e18-b851-754839713999] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gn9t4" [5f97e107-0212-4e18-b851-754839713999] Running
E1222 23:52:13.912994   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003746368s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.17s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
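Each network-plugin group closes with the same three in-pod probes: DNS resolves kubernetes.default, Localhost checks that port 8080 answers inside the pod, and HairPin connects from the netcat pod back to its own "netcat" service, which only succeeds when traffic can loop back through the service VIP to the originating pod. A sketch that replays the probes with kubectl exec, reusing the auto-003676 context from this run:

    // conncheck.go: replay the DNS, Localhost, and HairPin probes from the
    // log against the netcat deployment; a non-nil err marks a probe failed.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        probes := map[string][]string{
            "DNS":       {"nslookup", "kubernetes.default"},
            "Localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
            "HairPin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
        }
        for name, probe := range probes {
            args := append([]string{"--context", "auto-003676",
                "exec", "deployment/netcat", "--"}, probe...)
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            fmt.Printf("%s: err=%v\n%s", name, err, out)
        }
    }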

TestNetworkPlugins/group/kindnet/Start (50.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E1222 23:52:53.716349   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:53:15.354995   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (50.349385289s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.35s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-2c5t7" [062da9ca-3df9-477b-a81d-7b7f7c9d9611] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003751881s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-003676 "pgrep -a kubelet"
I1222 23:53:35.583345   75803 config.go:182] Loaded profile config "kindnet-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.16s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-w6rrf" [c1032a02-806d-4779-963d-71f408d6172b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1222 23:53:38.109992   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-w6rrf" [c1032a02-806d-4779-963d-71f408d6172b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004255953s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.16s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (67s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E1222 23:54:06.387225   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:06.392507   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:06.402775   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:06.423065   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:06.463332   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:06.543668   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:06.704117   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:07.024762   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:07.665963   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:54:08.946824   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m6.996584442s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.00s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-r2mrc" [748b5792-9da1-4faa-90dc-5fe158ffc441] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003597453s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-003676 "pgrep -a kubelet"
I1222 23:55:16.425703   75803 config.go:182] Loaded profile config "calico-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wscpj" [735d5685-fc70-4d9b-ac9f-37ee181ddccf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wscpj" [735d5685-fc70-4d9b-ac9f-37ee181ddccf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004105381s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.17s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (45.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (45.267193113s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (45.27s)

TestStartStop/group/no-preload/serial/Stop (1.32s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-063943 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-063943 --alsologtostderr -v=3: (1.316414208s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.32s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-063943 -n no-preload-063943: exit status 7 (102.18192ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-063943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-003676 "pgrep -a kubelet"
I1222 23:56:32.143723   75803 config.go:182] Loaded profile config "custom-flannel-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-t2x2p" [4138dc25-8a8d-49c2-8dbb-fec591d1f0ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-t2x2p" [4138dc25-8a8d-49c2-8dbb-fec591d1f0ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.002921611s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (66.92s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m6.921046947s)
--- PASS: TestNetworkPlugins/group/false/Start (66.92s)

TestNetworkPlugins/group/enable-default-cni/Start (63.81s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1222 23:57:09.814106   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:09.819395   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:09.829708   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:09.850102   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:09.890445   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:09.970795   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:10.131235   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:10.451811   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:11.092349   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:12.372631   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:14.932861   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:20.053851   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:21.116940   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:30.294845   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 23:57:50.775078   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m3.814188573s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.81s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-003676 "pgrep -a kubelet"
I1222 23:58:06.375855   75803 config.go:182] Loaded profile config "enable-default-cni-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ch99r" [b913d348-3c03-44c6-a045-a09fab21b864] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ch99r" [b913d348-3c03-44c6-a045-a09fab21b864] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003857424s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

TestNetworkPlugins/group/false/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-003676 "pgrep -a kubelet"
I1222 23:58:08.868441   75803 config.go:182] Loaded profile config "false-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

TestNetworkPlugins/group/false/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-j8lfl" [a2d94250-976c-4af4-99f8-4d899312bcdc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-j8lfl" [a2d94250-976c-4af4-99f8-4d899312bcdc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003867695s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/false/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (45.35s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (45.347234707s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.35s)

TestNetworkPlugins/group/bridge/Start (42.73s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1222 23:58:49.768934   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/kindnet-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (42.727493996s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.73s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-q9rkm" [c3af4a76-568d-40fb-982b-b47f150b4dbc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00370948s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
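
ControllerPod waits up to 10m for the flannel controller pod, identified by the app=flannel label in the kube-flannel namespace. A one-off equivalent with kubectl wait, using the label and namespace exactly as logged:

  # Block until the flannel DaemonSet pod reports Ready
  kubectl --context flannel-003676 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=600s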

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-003676 "pgrep -a kubelet"
I1222 23:59:23.811092   75803 config.go:182] Loaded profile config "bridge-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.18s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7w2kj" [d7859fa4-039c-4bf8-b0f4-0e930b2d1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7w2kj" [d7859fa4-039c-4bf8-b0f4-0e930b2d1bf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003158984s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-003676 "pgrep -a kubelet"
I1222 23:59:28.289154   75803 config.go:182] Loaded profile config "flannel-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.17s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2pdl5" [e41500a2-3ba9-497b-899e-14853e144c74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2pdl5" [e41500a2-3ba9-497b-899e-14853e144c74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004120466s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-003676 exec deployment/netcat -- nslookup kubernetes.default
E1222 23:59:34.070525   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/default-k8s-diff-port-700304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (66.93s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-003676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m6.927229028s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (66.93s)

TestPreload/PreloadSrc/gcs (14.72s)
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-022654 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E1223 00:00:01.156367   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/skaffold-356784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.134016   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.139376   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.149688   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.169924   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.210257   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.290685   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.451585   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:10.772207   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:11.413157   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:12.693653   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-022654 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (14.491284905s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-022654" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-022654
--- PASS: TestPreload/PreloadSrc/gcs (14.72s)

TestPreload/PreloadSrc/github (19.27s)
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-988233 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E1223 00:00:15.254723   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:20.374996   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:30.616081   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:00:31.033748   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/addons-268945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-988233 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (19.040532088s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-988233" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-988233
--- PASS: TestPreload/PreloadSrc/github (19.27s)

TestStartStop/group/newest-cni/serial/Stop (1.35s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-348344 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-348344 --alsologtostderr -v=3: (1.349494117s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

TestPreload/PreloadSrc/gcs-cached (0.68s)
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-916639 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-916639" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-916639
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.68s)
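
Comparing the three PreloadSrc runs: the cold gcs download took 14.72s and the cold github download 19.27s, while this gcs run finished in 0.68s, presumably because the v1.34.0-rc.2 tarball fetched in the github step was already in the local cache. One way to confirm what is cached, with the caveat that the path below is the conventional location and is an assumption here:

  # Hypothetical check of the local preload cache
  ls ~/.minikube/cache/preloaded-tarball/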

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-348344 -n newest-cni-348344: exit status 7 (95.256772ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-348344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
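
Note the sequence above: minikube status exits with status 7 and prints Stopped for the host, and the test explicitly treats that as acceptable ("may be ok") before re-enabling the addon. The same gating sketched in shell, with profile name and flags taken from the log:

  # status exits non-zero while the host is Stopped; don't abort on it
  out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-348344 || echo "host not running (exit $?), continuing"
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-348344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4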

TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-003676 "pgrep -a kubelet"
I1223 00:01:01.244654   75803 config.go:182] Loaded profile config "kubenet-003676": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-003676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-4tzxv" [8020250e-a4b4-4b1b-af1c-d59475c025a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-4tzxv" [8020250e-a4b4-4b1b-af1c-d59475c025a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003254219s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.21s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-003676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-003676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)
E1223 00:01:30.659805   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/functional-580825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.058459   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/calico-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.330895   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.336187   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.346440   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.366751   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.407036   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.487378   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.648040   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:32.968819   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:33.609348   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:34.889843   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:37.450953   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:42.571833   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:52.812434   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:01:53.430822   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/old-k8s-version-687073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:02:09.814500   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/auto-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1223 00:02:13.293212   75803 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/custom-flannel-003676/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-348344 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
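
VerifyKubernetesImages pulls the image inventory as JSON, which the test presumably compares against the expected image set for this Kubernetes version. The same output is convenient for ad-hoc inspection; the jq filter below is illustrative only and assumes an array-of-objects shape with a repoTags field:

  # Hypothetical post-processing of the JSON image inventory
  out/minikube-linux-amd64 -p newest-cni-348344 image list --format=json | jq -r '.[].repoTags[]'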

Test skip (29/436)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
263 TestGvisorAddon 0
292 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
293 TestISOImage 0
357 TestChangeNoneUser 0
360 TestScheduledStopWindows 0
379 TestStartStop/group/disable-driver-mounts 0.2
401 TestNetworkPlugins/group/cilium 3.91

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

TestDownloadOnly/v1.34.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
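
Per the skip message, this test requires the containerd runtime on the Docker driver, and this job pinned --container-runtime=docker throughout. A start invocation that would qualify, sketched with a hypothetical profile name:

  # Hypothetical profile; runtime flag chosen per the skip condition above
  out/minikube-linux-amd64 start -p containerd-test --driver=docker --container-runtime=containerd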

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-834773" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-834773
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/cilium (3.91s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-003676 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-003676

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-003676

>>> host: /etc/nsswitch.conf:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /etc/hosts:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /etc/resolv.conf:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-003676

>>> host: crictl pods:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: crictl containers:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> k8s: describe netcat deployment:
error: context "cilium-003676" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-003676" does not exist

>>> k8s: netcat logs:
error: context "cilium-003676" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-003676" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-003676" does not exist

>>> k8s: coredns logs:
error: context "cilium-003676" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-003676" does not exist

>>> k8s: api server logs:
error: context "cilium-003676" does not exist

>>> host: /etc/cni:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: ip a s:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: ip r s:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: iptables-save:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: iptables table nat:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-003676

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-003676

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-003676" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-003676" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-003676

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-003676

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-003676" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-003676" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-003676" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-003676" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-003676" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: kubelet daemon config:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> k8s: kubelet logs:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Dec 2025 23:41:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-958290
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Dec 2025 23:40:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-541329
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22301-72233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Dec 2025 23:40:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.112.2:8443
  name: stopped-upgrade-521826
contexts:
- context:
    cluster: NoKubernetes-958290
    extensions:
    - extension:
        last-update: Mon, 22 Dec 2025 23:41:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-958290
  name: NoKubernetes-958290
- context:
    cluster: running-upgrade-541329
    user: running-upgrade-541329
  name: running-upgrade-541329
- context:
    cluster: stopped-upgrade-521826
    user: stopped-upgrade-521826
  name: stopped-upgrade-521826
current-context: NoKubernetes-958290
kind: Config
users:
- name: NoKubernetes-958290
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/NoKubernetes-958290/client.crt
    client-key: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/NoKubernetes-958290/client.key
- name: running-upgrade-541329
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/running-upgrade-541329/client.crt
    client-key: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/running-upgrade-541329/client.key
- name: stopped-upgrade-521826
  user:
    client-certificate: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/stopped-upgrade-521826/client.crt
    client-key: /home/jenkins/minikube-integration/22301-72233/.minikube/profiles/stopped-upgrade-521826/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-003676

>>> host: docker daemon status:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: docker daemon config:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: docker system info:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: cri-docker daemon status:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: cri-docker daemon config:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: cri-dockerd version:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: containerd daemon status:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: containerd daemon config:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: containerd config dump:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: crio daemon status:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: crio daemon config:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: /etc/crio:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

>>> host: crio config:
* Profile "cilium-003676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003676"

----------------------- debugLogs end: cilium-003676 [took: 3.720491657s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-003676" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-003676
--- SKIP: TestNetworkPlugins/group/cilium (3.91s)